MODULE 1
1
THE CONCEPT OF A RANDOM VARIABLE
Unit Structure :
1.0 Objectives
1.1 Introduction
1.2 Types of Random Variables
1.3 Mean of a random variable
1.4 Variance of a random variable
1.5 Basic Laws of probability
1.6 Types of Discrete Random Variables
1.7 Continuous distribution
1.8 Reference
1.0 OBJECTIVES
After going through this module you will be able :
To understand the concept of a random variable
To understand various types of random variables
To understand the meaning of covariance and correlation
1.1 INTRODUCTION
A random variable is a variable whose value is not known, or
a function that assigns values to each of an experiment's outcomes.
Random variables are often used in econometric or regression
analysis to determine statistical relationships between two or more
variables.
Random variables are associated with random processes,
where a random process is an event or experiment that has a
random outcome, for e.g. rolling a die, tossing a coin, choosing a
card, or any one of many other possibilities. The exact outcome
cannot be predicted in advance, so we have to
calculate the probability of a particular outcome.
Random variables are denoted by capital letters, for e.g. 'X' or
'Y', where the letter usually refers to the probability of getting a certain
outcome. Random variables give numbers to outcomes of random
events. It means that though an event is random, its outcome is
quantifiable. For e.g., rolling a die: let's say we wanted to know how
many sixes we will get if we roll a die a certain number of times.
In this case the random variable X could be equal to 1 if we get a six and
0 if we get any other number.
Let us discuss another example of a random variable, i.e. the
outcome of a coin toss. Here we have a probability
distribution in which the outcomes of the random event are not
all equally likely to happen. If the random variable, Y, is the number of
heads we get from tossing two coins, then Y could be 0, 1 or 2.
This means that we could have no heads, one head or both heads
on a two-coin toss. However, the two coins can land in four different
ways: TT, HT, TH and HH. Therefore, P(Y = 0) = 1/4,
since we have only one chance of getting no heads (i.e. two tails
(TT)) when the coins are tossed. Similarly, the probability of getting
two heads (HH) is also 1/4. Getting one head has a likelihood of
occurring in two ways: HT and TH. In this case, P(Y = 1) = 2/4 = 1/2.
1.2 TYPES OF RANDOM VARIABLES
There are two types of random variables :
A) Discrete random variables
B) Continuous random variables
Discrete random variables take on a countable
number of distinct values. For e.g., consider an experiment where a coin is tossed
three times. If X represents the number of times that the
outcome comes up heads, then X is a discrete random variable
that can only have the values 0, 1, 2, 3 (from no heads in three
successive coin tosses to all heads). No other value is possible for
X.
Continuous random variables can represent any value within
a specified range or interval and can take on an infinite number of
possible values, e.g. an experiment that involves measuring the
amount of rainfall in a city over a year, or the average height or weight
of a random group of 100 people.
1.3 MEAN OF A RANDOM VARIABLE
The mean of a discrete random variable X is a weighted
average of the possible values that the random variable can take.
The mean of a random variable weights each outcome xi according
to its probability pi. The expected value of X is denoted μx and the formula is
μx = x1p1 + x2p2 + ............. + xkpk = Σ xipi
The mean of a random variable provides the long-run
average of the variable, or the expected average outcome over
many observations.
For a continuous random variable, the mean is defined by
the density curve of the distribution. For a symmetric density curve,
such as the normal distribution, the mean lies at the center of the
curve.
1.4 VARIANCE OF A RANDOM VARIABLE
The variance of a discrete random variable X measures the
spread, or variability, of the distribution and is defined by

σ²X = Σ (xi − μx)² pi

The standard deviation σ is the square root of the variance.
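These two formulas can be checked with a few lines of Python; a minimal sketch, assuming a fair six-sided die as the illustrative random variable (the variable names are ours) :

outcomes = [1, 2, 3, 4, 5, 6]   # possible values xi (assumed: a fair die)
probs = [1/6] * 6               # probabilities pi

# Mean: weight each outcome by its probability.
mean = sum(x * p for x, p in zip(outcomes, probs))

# Variance: probability-weighted squared deviations from the mean.
variance = sum((x - mean) ** 2 * p for x, p in zip(outcomes, probs))

print(mean)      # 3.5
print(variance)  # about 2.917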
1.5 BASIC LAWS OF PROBABILITY
Probability is defined as a number between 0 and 1
representing the likelihood of an event happening. A probability of 0
indicates no chance of that event occurring, whereas a probability
of 1 means the event will certainly occur.
Basic Properties of Probability Rules :
Every probability is between 0 and 1. In other words, if A is an
event, then 0 ≤ P(A) ≤ 1.
The sum of the probabilities of all the outcomes is one. For e.g.,
if all the outcomes in the sample space are denoted by Ai, then Σ P(Ai) = 1.
Impossible events have probability zero. If event A is
impossible, then P(A) = 0.
Certain events have probability 1. If event A is certain to occur,
then P(A) = 1.
The Probability Rules :
1) Rule 1 : Whenever an event is the union of two other events,
the Addition Rule will apply. If A and B are two events, then

P(A or B) = P(A) + P(B) − P(A and B)

It can also be written as :

P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
2) Rule 2 : Whenever an event is the complement of another event,
the Complementary Rule will apply. If A is an event, then we have
the following rule:

P(not A) = 1 − P(A)

This is also written as P(A′) = 1 − P(A)
3) Rule 3 : Whenever partial knowledge of an event is available,
the Conditional Rule will be applied. If event A is already known
to have occurred and the probability of event B is desired, then we will
have the following rule:

P(B given A) = P(A and B) / P(A)

It is further written as :

P(B | A) = P(A ∩ B) / P(A)
4) Rule 4 : Whenever an event is the intersection of two other
events, the Multiplication Rule will apply. If events A and B need
to occur simultaneously, then we have the following rule:

P(A and B) = P(A) × P(B given A)

It is also written as :

P(A ∩ B) = P(A) P(B | A)
Let us discuss these rules with the help of an example of
rolling dice. Suppose we roll two dice.
1) The probability that both dice are 5 is :
P(both are 5) = P(first is a 5 and second is a 5)
= P(first is a 5) × P(second is a 5, given first is a 5) = 1/6 × 1/6 = 1/36

Here the word 'both' indicates two events had to happen at
the same time, i.e. the first event and the second event. We used
the multiplication rule because of the key word 'and'. The first factor
resulted from the Basic Rule on a single die.
2) The probability that at least one die is a 5 is :
P(at least one is a 5) = P(first is a 5 or second is a 5)
= P(first is a 5) + P(second is a 5) − P(first is a 5 and second is a 5)
= 1/6 + 1/6 − 1/36 = 11/36
First we had to recognize that the event "at least one" could
be fulfilled by one or the other of two separate cases. We used
the Addition Rule because of the word 'or'. The first two terms come
from the Basic Rule on a single die, while the third term resulted
from only one outcome where both dice will be 5.
3) The probability that neither die is a 5 is :
P(neither is a 5) = 1 − P(at least one is a 5) = 1 − 11/36 = 25/36

In this case, the word "neither" is complementary to the phrase "at
least one", so we used the Complementary Rule.
4) Given that at least one of the dice is a 5, the probability that
the other is a 5 is :
P(other is a 5 | at least one is a 5) = P(both are 5) / P(at least one
is a 5) = (1/36) / (11/36) = 1/11

The partial knowledge required the Conditional Rule.
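All four results can be verified by enumerating the 36 equally likely outcomes of two dice; a small Python sketch (the variable names are ours) :

from fractions import Fraction

# All 36 equally likely outcomes of rolling two dice.
outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]
total = len(outcomes)

both_5 = sum(1 for a, b in outcomes if a == 5 and b == 5)
at_least_one_5 = sum(1 for a, b in outcomes if a == 5 or b == 5)

print(Fraction(both_5, total))                  # 1/36  (multiplication rule)
print(Fraction(at_least_one_5, total))          # 11/36 (addition rule)
print(Fraction(total - at_least_one_5, total))  # 25/36 (complementary rule)
print(Fraction(both_5, at_least_one_5))         # 1/11  (conditional rule)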
1.6 TYPES OF DISCRETE RANDOM VARIABLES
When solving problems, we should be able to recognize a
random variable which fits one of the formats of discrete random
variables.
1) Bernoulli Random Variable : is the simplest kind of random
variable. It can take on two values, 1 and 0. If an experiment with
probability of success p results in success then the variable takes on
a 1, and 0 if the result is a failure. For e.g., if a shooter hits the target,
we call it a 'success', and if he misses it we call it a 'failure'. Let us assume
that whether the shooter hits or misses the particular target on any
particular attempt has nothing to do with his success or failure on
any other attempts. In this case we are ruling out the possibility of
improvement by the shooter with practice. Assume the probability of a
success is p and that of failure is 1 − p, where p is a constant
between the values 1 and 0. A random variable that takes value 1 in
case of success and 0 in case of failure is called a Bernoulli random
variable.
A random variable X has the Bernoulli distribution with parameter p
if its probability mass function (pmf) is

P(X = x) = p^x (1 − p)^(1−x), x = 0, 1

where P(X = x) = 1 − p if x = 0, and P(X = x) = p if x = 1.
Conditions for Bernoulli trials
1) A finite number of trials.
2) Each trial should have exactly two outcomes: success or failure.
3) Trials should be independent.
4) The probability of success or failure should be the same in each trial.
For e.g., tossing a coin. Suppose, for a Bernoulli random
variable, p = 0.4. Then p(0) = 0.6 and p(1) = 0.4.

Suppose the coin is tossed four times. The event that the
outcome will be Head on the first trial, Tail on the next two and
again Head on the last can be represented as : S = (1, 0, 0, 1)
The probability with which the outcome is Head is p,
whereas the probability with which Tail will occur is 1 − p. The events
'H' or 'T' on each trial are independent events, in the sense that
whether the outcome is H or T on any trial is independent of the
chance of 'Head' or 'Tail' on any previous or subsequent trials. If A
and B are independent events, the probability of observing A and B
equals the probability of A multiplied by the probability of B.
Therefore, the probability of observing (1, 0, 0, 1) together is :

p(1 − p)(1 − p)p = p²(1 − p)²

2) The Binomial Random Variable :
A binomial distribution can be thought of as simply the
probability of a success or failure outcome in an experiment or
survey that is repeated multiple times. It has only two possible
outcomes (the prefix "bi" means two); for e.g., a coin toss has only
two outcomes, heads or tails, or taking a test could have two
outcomes, pass or fail.

A binomial random variable is the number of successes X in
n repeated trials of a binomial experiment. The probability
distribution of a binomial random variable is called a binomial
distribution.
For a variable to be classified as a binomial random variable,
the following conditions must be satisfied :
1) There must be a fixed sample size (a certain number of trials).
2) For each trial, the success must either happen or it must not.
3) The probability for each event must be exactly the same.
4) Each trial must be an independent event.
The binomial probability refers to the probability that a
binomial experiment results in exactly X successes. Given X, n
and p, we can compute the binomial probability based on the
binomial formula.

Suppose a binomial experiment consists of n trials and results in X
successes. If the probability of success on an individual trial is
p, then the binomial probability is :

b(X; n, p) = nCX · p^X · (1 − p)^(n−X)

OR

b(X; n, p) = [n! / (X!(n − X)!)] · p^X · (1 − p)^(n−X)
WhereXThe number of successes that result from the binomial
experiment.nThe number of trialspThe probability of success on an individual tri al
QThe probability of failure on an individual trial1Cp!nThe factoral of n.,, bXn pbinomial probability
Cnrthe number of combinat ions of n things, taken r at a time.
For e.g., suppose a die is tossed 5 times. What is the
probability of getting exactly 2 fours?
Solution :
This is a binomial experiment in which the number of trials is
equal to 5, the number of successes is equal to 2 and the
probability of success on a single trial is 1/6 or about 0.167.
Therefore, the binomial probability is :

b(2; 5, 0.167) = 5C2 × (0.167)² × (0.833)³
b(2; 5, 0.167) = 0.161
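This result can be checked directly from the binomial formula; a Python sketch using the standard library's math.comb for nCx (the function name is ours) :

import math

def binomial_pmf(x, n, p):
    # b(x; n, p) = nCx * p**x * (1 - p)**(n - x)
    return math.comb(n, x) * p ** x * (1 - p) ** (n - x)

# Exactly 2 fours in 5 tosses of a die, p = 1/6 (about 0.167).
print(binomial_pmf(2, 5, 1/6))  # about 0.161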

3) The Poisson Distribution :
A Poisson distribution is the discrete probability distribution
that results from a Poisson experiment. It has the following
properties :
- The experiment results in outcomes that can be classified as successes or failures.
- The average number of successes (μ) that occurs in a specified region is known.
- The probability that a success will occur is proportional to the size of the region.
- The probability that a success will occur in an extremely small region is virtually zero.
For e.g., a certain restaurant gets an average of 4 customers
per minute for takeaway orders. In this case, a Poisson distribution
can be used to analyze the probability of various events regarding the
total number of customers visiting for takeaway orders. It helps the
manager of the restaurant to plan for such events with staffing and
scheduling.
Likewise, the Poisson distribution can also be applied in
subjects like biology, disaster management and finance, where the
events are time dependent.
A Poisson random variable is the number of successes that
result from a Poisson experiment. The probability distribution of a
Poisson random variable is called a Poisson distribution.
Suppose the average number of successes within a given
region is μ; then the Poisson probability is :

P(X; μ) = (e^(−μ) · μ^X) / X!
Where
e : a constant equal to approximately 2.71828 (e is the base of the natural logarithm system)
μ : the mean number of successes that occur in a specified region
X : the actual number of successes that occur in a specified region
P(X; μ) : the Poisson probability
For e.g. :
The average number of high end cars sold by the dealer
of a Luxury Motor Company is 2 cars per day. What is the
probability that exactly 3 high end cars will be sold tomorrow?
Solution : We have the values
μ = 2, the average number of high end cars sold per day
X = 3, the number of high end cars to be sold tomorrow
e = 2.71828, a constant
By using the Poisson formula we get :

P(X; μ) = (e^(−μ) · μ^X) / X!
P(3; 2) = (2.71828^(−2) × 2³) / 3!
P(3; 2) = (0.13534 × 8) / 6
P(3; 2) = 0.180
Thus, the probability of selling 3 high end cars by tomorrow
is 0.180.
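The same Poisson calculation in a short Python sketch (the function name is ours) :

import math

def poisson_pmf(x, mu):
    # P(X; mu) = e**(-mu) * mu**x / x!
    return math.exp(-mu) * mu ** x / math.factorial(x)

# Exactly 3 high end cars sold when the daily average is 2.
print(poisson_pmf(3, 2))  # about 0.180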
1.7 CONTINUOUS DISTRIBUTION : THE NORMAL
DISTRIBUTION
The normal distribution refers to a family of continuous
probability distributions. It is also known as the Gaussian distribution
and the bell curve. It is a probability function that describes how the
values of a variable are distributed. It is a symmetric distribution
where most of the observations cluster around the central peak and
the probabilities for values further away from the mean taper off
equally in both directions. Extreme values in both tails of the
distribution are similarly unlikely.
The normal equation for the normal distribution when the
value of the random variable is X is :

f(X) = [1 / (σ√(2π))] · exp[−(X − μ)² / (2σ²)]

X = normal random variable
μ = mean
σ = standard deviation
π = approximately 3.14159
e = approximately 2.71828
The normal equation is the probability density function for
the normal distribution.
The Normal curve : The normal distribution depends on two factors
- the mean and the standard deviation. The mean of the distribution
determines the location of the centre of the graph, and the standard
deviation determines the height and width of the graph. All normal
distributions look like a symmetric, bell-shaped curve as shown
below :
Figure No. 1.1
When the standard deviation is small, the curve is tall and
narrow and when the standard deviation is big, the curve is short
and wide.
Probability and the Normal curve
The normal distribution is a continuous probability
distribution, where
- the total area under the normal curve is equal to 1
- the probability that a normal random variable X equals any particular value is 0
- the probability that X is greater than a equals the area under the normal curve bounded by a and plus infinity (indicated by the non-shaded area in the figure below)
- the probability that X is less than a equals the area under the normal curve bounded by a and minus infinity (indicated by the shaded area in the figure below)

Figure No. 1.2
There are some important features of the normal distribution,
as follows :
1. The distribution is symmetrical about the mean, which equals
the median and the mode.
2. About 68% of the area under the curve falls within 1 standard
deviation of the mean.
3. About 95% of the area under the curve falls within 2 standard
deviations of the mean.
4. About 99.7% of the area under the curve falls within 3 standard
deviations of the mean.
These last 3 points are collectively known as the empirical
rule or the 68-95-99.7 rule. Let us discuss it with an example of
an express food delivery by a restaurant, assuming a mean
delivery time of 30 minutes and a standard deviation of 5 minutes.
Using the empirical rule, we can determine that 68% of the
delivery times are between 25-35 minutes (30 ± 5), 95% are
between 20-40 minutes (30 ± 2 × 5), and 99.7% are between
15-45 minutes (30 ± 3 × 5).
Suppose an average tubelight manufactured by ABC
Corporation lasts 300 days with a standard deviation of 50 days.
Assuming that tubelight life is normally distributed, what is the
probability that ABC Corporation's tubelight will last at most 365
days?

Solution : Given a mean life of 300 days and a standard deviation
of 50 days, we want to find the cumulative probability that tubelight
life is less than or equal to 365 days. Thus,
- the value of the normal random variable is 365 days
- the mean is equal to 300 days
- the standard deviation is equal to 50 days

By entering these values to find the cumulative probability,
we get P(X ≤ 365) = 0.90
Hence, there is a 90% chance that a tubelight will burn out
within 365 days.
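This cumulative probability can be computed without tables, since the normal CDF can be written with the error function available in Python's standard library; a minimal sketch :

import math

def normal_cdf(x, mu, sigma):
    # P(X <= x) for a normal distribution with mean mu and sd sigma.
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Tubelight example: mean 300 days, standard deviation 50 days.
print(normal_cdf(365, 300, 50))  # about 0.90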
1.8 REFERENCE

S. Shyamala and Navdeep Kaur, 'Introductory Econometrics'.
Neeraj R. Hatekar, 'Principles of Econometrics : An Introduction (Using R)'.

2
COVARIANCE AND CORRELATION
Unit Structure :
2.0 Objectives
2.1 Introduction
2.2 Covariance
2.3 Correlation Analysis
2.4 Methods of Studying Correlation
2.5 The Law of Large Numbers
2.6 References

2.0 OBJECTIVES

After going through this module you will be able :
To understand the meaning of covariance and correlation.
To understand the methods of studying correlation.
To understand the law of large numbers.
2.1 INTRODUCTION

Covariance is a measure used to determine how much two random
variables vary from their respective means, and correlation is a
statistical method which helps in analysing the relationship between
two or more variables. The value of the covariance lies
between −∞ and +∞, while the value of the correlation coefficient lies
between −1 and +1.
2.2 COVARIANCE

Covariance is a measure used to determine how much two random
variables vary from their respective means. In other words, the prefix
'Co' refers to a joint action and variance refers to the change. In
covariance, two variables are related based on how these variables
change in relation to each other. The value of the covariance lies
between −∞ and +∞.

For a population,

COV(X, Y) = Σ (Xi − X̄)(Yi − Ȳ) / n, summing i from 1 to n

Where,
X, Y = two random variables
X̄ = mean of random variable X
Ȳ = mean of random variable Y
n = length of random variables X, Y

For a sample,

COV(X, Y) = Σ (Xi − X̄)(Yi − Ȳ) / (n − 1), summing i from 1 to n

X̄ and Ȳ = means of the given sample sets
n = total number of samples
Xi and Yi = individual samples of the sets
2.3 CORRELATION ANALYSIS

Correlation is a statistical method which helps in analyzing
the relationship between two or more variables. The study of
correlation is useful due to the following reasons :
1) Since most of the variables have some kind of relationship,
quantification of it is necessary to learn more about them.
2) Correlation is a first step towards estimation or prediction of
unknown values of the variables.
3) An understanding of the degree and nature of correlation
between two or more variables helps in reducing uncertainties
about the economic behaviour of important variables like price level
and money supply, interest rate and investment, taxation and
willingness to work, etc.
Correlation is classified in three ways :
1) Positive and Negative correlation (depends upon the
direction of change) : When both the variables change in the
same direction (i.e. they increase or decrease together) it is
positive correlation. For example, when price rises, supply also
increases; when income falls, consumption also declines. When an
increase in one variable is accompanied by a fall in the other, it is
negative correlation. For example, an increase in price leads to a fall in
demand; an increase in the interest rate is accompanied by a fall in
investment.
2) Simple and Multiple correlation (depends upon the number of
variables under study) : Simple correlation is the relationship
between two variables, like height and weight of a person, or wage
rate and employment in the economy. Multiple correlation, on the
other hand, examines the relationship between three or more variables.
For example, a relationship between production of rice per acre,
rainfall and use of fertilizers is multiple in nature.
3) Linear and non-linear (depends on the ratio of change
between two variables) : When a change in one variable is in
constant ratio with a change in the other, it is a linear relationship. For
example, if doubling the amount of fertilizers used exactly doubles the
yield per acre, it is a linear relationship. A non-linear relationship exists
when a change in one variable is not in constant ratio with a
change in the other. In this case doubling the amount of fertilizers may
not exactly double the output per acre.
2.4 METHODS OF STUDYING CORRELATION

The following important methods of studying correlation between
two variables will be discussed in this unit :
Scatter diagram method.
Karl Pearson's Coefficient of Correlation.
Rank Correlation Coefficient.
2.4.1 Scatter diagram
It is the simplest method of studying correlation, using a
graphical method. Under this method, the given data about two
variables are plotted in terms of dots. By looking at the spread or
scatter of these dots, a quick idea about the degree and nature of
correlation between the two variables can be obtained. The greater the
spread of the plotted points, the lesser is the association between the
two variables. That is, if the two variables are closely related, the scatter
of the points representing them will be less, and vice versa.

The following scatter diagrams explain correlations of
different degrees and directions.
1) Figure 1 represents perfect positive correlation where the coefficient
of correlation (r) = 1.
2) Figure 2 represents perfect negative correlation where the
coefficient of correlation (r) = −1.
3) Figure 3 indicates high degree positive correlation where r = +0.5
or more.
4) Figure 4 indicates high degree negative correlation where r = −0.5
or more.
5) Figure 5 represents low degree positive correlation where the
scatter of the points is more.
6) Figure 6 represents low degree negative correlation where the
scatter of the points is more in the negative direction.
7) Figure 7 indicates that there is no correlation between the two
variables. Here r = 0.

Thus, the closeness and direction of the points representing the
values of the two variables determine the correlation between them.
Advantages and Limitations of this method :
It is a simple method giving a very quick idea about the nature of
correlation.
It does not involve any mathematical calculations.
It is not influenced by the extreme values of the variables.
This method, however, does not give the exact value of the
coefficient of correlation and hence is less useful for further
statistical treatment.
2.4.2 Karl Pearson's Coefficient of Correlation (r) :
This is the most widely used method of studying a bi-variate
correlation. Under this method, the value of r can be obtained in
any of the following three ways.

I) Direct Method of finding the correlation coefficient
Ex.1 Calculate Karl Pearson's coefficient of correlation using the direct
method.
Ex.2 Calculate Karl Pearson's coefficient of correlation by taking
deviations from the actual mean.
Ex.3 Compute Karl Pearson's coefficient of correlation by taking
deviations from an assumed mean.
(This method is used when the actual means are in fractions.)

For the above data, the actual means X̄ and Ȳ will be in fractions. So
we can take assumed means for both the variables and then find
the deviations dx and dy.
Let the assumed mean for X = 9
Let the assumed mean for Y = 29
Since r = 0.89, there is a high degree positive correlation between
X and Y.
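A Python sketch of Karl Pearson's r computed directly from deviations from the actual means, as in Ex.2 (the function name and data lists are assumed for illustration) :

import math

def pearson_r(X, Y):
    # r = sum(dx * dy) / sqrt(sum(dx^2) * sum(dy^2)),
    # where dx, dy are deviations from the actual means.
    n = len(X)
    x_bar, y_bar = sum(X) / n, sum(Y) / n
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(X, Y))
    den = math.sqrt(sum((x - x_bar) ** 2 for x in X) *
                    sum((y - y_bar) ** 2 for y in Y))
    return num / den

print(pearson_r([1, 2, 3, 4, 5], [2, 4, 5, 4, 5]))  # about 0.77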
Check your progress
1) Find the correlation coefficient for the following data.
2.4.3 Rank Correlation :
For certain attributes like beauty, honesty, etc., quantitative
measurement is not possible. Also, sometimes the population under
study may not be normally distributed. In such cases, instead of
Karl Pearson's coefficient of correlation, Spearman's Rank
correlation coefficient is calculated. This method is used to
determine the level of agreement or disagreement between two
judges. The calculations involved in this method are much simpler
than in the earlier method. Rank correlation is calculated using the
following formula :

R = 1 − [6 ΣD² / (n(n² − 1))]

where D is the difference between the two ranks of each observation
and n is the number of observations.

Rank correlation is computed in the following two ways :
1) When ranks are given.
2) When ranks are not given.
Rank correlation when ranks are given :
Ex.4 Following are the ranks given by two judges in a beauty
contest. Find the rank correlation coefficient.

Since the rank correlation coefficient is −0.5, there is a moderate
negative correlation between the rankings by the two judges.
Calculation of the rank correlation coefficient when the ranks
are not given :
Ex.5 Calculate rank correlation for the following data.
When the ranks are not given, we have to assign ranks to
the given data. The ranks can be assigned in ascending (rank 1 to
the lowest value) or descending (rank 1 to the highest value)
order.

In this example, ranks are assigned in descending order:
the highest value gets rank 1, and so on.

Since the rank correlation coefficient is −0.167, the relationship
between X and Y is low degree negative.
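A sketch of the rank correlation formula in Python, assuming ranks are already assigned and there are no ties (the ranks used are illustrative) :

def spearman_rho(rank_x, rank_y):
    # rho = 1 - 6 * sum(D^2) / (n * (n^2 - 1)), D = difference in ranks.
    n = len(rank_x)
    d_squared = sum((rx - ry) ** 2 for rx, ry in zip(rank_x, rank_y))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Ranks given by two judges to five contestants (assumed data).
print(spearman_rho([1, 2, 3, 4, 5], [2, 1, 4, 3, 5]))  # 0.8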
Check your progress
Find the rank correlation coefficient for the following data.

2.5 THE LAW OF LARGE NUMBERS
The law of large numbers is one of the most important
theorems in probability theory. It states that, as a probabilistic
process is repeated a large number of times, the relative
frequencies of its possible outcomes will get closer and closer to
their respective probabilities. The law demonstrates and proves the
fundamental relationship between the concepts of probability and
frequency.
In 1713, the Swiss mathematician Jakob Bernoulli proved this
theorem in his book. It was later refined by other noted
mathematicians, such as Pafnuty Chebyshev.
The law of large numbers shows that if you take an
unpredictable experiment and repeat it enough times, you will end up
with its average. In technical terms, if you have repeated,
independent trials, each with probability of success p, the
percentage of successes will differ from p by an amount that
converges to 0 as the number of trials n tends to infinity. In simpler
words, if you repeat an experiment many times you will start to see
a pattern and you will be able to figure out the probabilities.
For e.g., throw a die and we will get a random number
(1, 2, 3, 4, 5, 6). If we throw it 100,000 times, we will get an
average of about 3.5, which is the expected value.
Another example is tossing a coin 1, 2, 4, 10, etc. times:
the relative frequency of heads can easily happen to be far from
the expected 50%. That is because 1, 2, 4, 10, … are all small
numbers. On the other hand, if we tossed a coin 1000 or 100000
times, then the relative frequency would be very close to 50%, since
1000 and 100000 are large numbers.
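The die example can be simulated in a few lines to watch the running average settle near 3.5 as the number of throws grows; a sketch :

import random

random.seed(1)  # fixed seed so the illustration is reproducible
for n in (100, 10_000, 1_000_000):
    rolls = (random.randint(1, 6) for _ in range(n))
    print(n, sum(rolls) / n)  # averages drift toward the expected value 3.5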
Weak Law of large numbers :
The Law of Large Numbers is sometimes called the Weak
Law of Large Numbers to distinguish it from the Strong Law of
Large Numbers. The two versions of the law differ in their
mode of convergence. Under the weak law, the sample mean
converges to the expected mean in mean square and in probability.
The strong law of large numbers is where the sample mean M
converges to the expected mean μ with probability 1.
2.6 REFERENCE

S. Shyamala and Navdeep Kaur, 'Introductory Econometrics'.
Neeraj R. Hatekar, 'Principles of Econometrics : An Introduction (Using R)'.

MODULE 2
3
TEST OF HYPOTHESIS : BASIC
CONCEPTS AND PROCEDURE
Unit Structure :
3.0 Objectives
3.1 Introduction
3.2 Hypothesis Testing
3.3 Basic Concepts in Hypothesis Testing
3.4 Process of Hypotheses Testing
3.5 Procedure for Testing of Hypotheses
3.6 Reference
3.0 OBJECTIVES
To understand the meaning of hypothesis testing.
To understand the basic concepts of hypothesis testing.
To understand the process and procedure of hypothesis testing.
3.1 INTRODUCTION
A hypothesis is a proposed assumption, explanation,
supposition or solution to be proved or disproved. It is considered
the main instrument in research. It stands at the midpoint of the
research: if a hypothesis is not formulated, the researcher cannot
progress effectively. The main task in research is to test the hypothesis
against facts. If the hypothesis is proved, the solution can be formed;
if it is not proved, then an alternative hypothesis needs to be formulated
and tested.
Thus, a well formulated hypothesis helps us to decide the
type of data required to be collected.
An important function in research is the formulation of
hypotheses. The entire research activity is directed towards the making
of hypotheses. Research can begin with a well formulated hypothesis,
or the hypothesis may be the end product of the research work. A
hypothesis gives us guidelines for an investigation on the basis of
previously available information. In its absence, research may collect
unrequired data and may eliminate the required data. Thus a hypothesis
is an assumption which can be put to test to decide its validity.
3.2 HYPOTHESIS TESTING

In business research and social science research, different
approaches are used to study a variety of issues. This type of research
may be formal or informal; all research begins with a generalized
idea in the form of a hypothesis. A research question is usually there
at the beginning, where research efforts are directed toward an area
of study, or it may take the form of a question about the relationship
between two or more variables. For example, do good working
conditions improve employee productivity? Another question might be
how working conditions influence the employees' work.
3.3 BASIC CONCEPTS IN HYPOTHESIS TESTING

Basic concepts in the context of testing of hypotheses need
to be explained. These are :
3.3.1 Null and Alternative hypotheses :
In the context of statistical analysis, we often talk about the null
hypothesis and the alternative hypothesis. If we are to compare method
A with method B about its superiority, and we proceed on the
assumption that both methods are equally good, then this
assumption is termed the null hypothesis. As against this, if we
think that method A is superior or method B is inferior,
we are then stating what is termed the alternative hypothesis. The
null hypothesis is generally symbolized as H0 and the alternative
hypothesis as Ha. Suppose we want to test the hypothesis that the
population mean (μ) is equal to the hypothesized mean (μH0) =
100. Then we would say that the null hypothesis is that the
population mean is equal to the hypothesized mean 100, and
symbolically we can express it as :

H0 : μ = μH0 = 100
If our sample results do not support this null hypothesis, we
should conclude that something else is true. What we conclude on
rejecting the null hypothesis is known as the alternative hypothesis. In
other words, the set of alternatives to the null hypothesis is referred
to as the alternative hypothesis. If we accept H0, then we are
rejecting Ha, and if we reject H0, then we are accepting Ha. For
H0 : μ = μH0 = 100, we may consider three possible alternative
hypotheses: Ha : μ ≠ 100, Ha : μ > 100 or Ha : μ < 100.
If a hypothesis is of the type μ = μH0, then we call such a
hypothesis a simple (or specific) hypothesis, but if it is of the type
μ ≠ μH0 or μ > μH0 or μ < μH0, then we call it a composite (or
nonspecific) hypothesis.
The null hypothesis and the alternative hypothesis are chosen
before the sample is drawn (the researcher must avoid the error of
deriving hypotheses from the data that he collects and then testing
the hypotheses from the same data). In the choice of the null
hypothesis, the following considerations are usually kept in view :
1) The alternative hypothesis is usually the one which one wishes to
prove, and the null hypothesis is the one which one wishes to
disprove. Thus, a null hypothesis represents the hypothesis we are
trying to reject, and the alternative hypothesis represents all other
possibilities.
2) If the rejection of a certain hypothesis when it is actually true
involves great risk, it is taken as the null hypothesis, because then the
probability of rejecting it when it is true is α (the level of
significance), which is chosen very small.
3) The null hypothesis should always be a specific hypothesis, i.e. it
should not state only an approximate value.
Generally, in hypothesis testing we proceed on the basis of the
null hypothesis, keeping the alternative hypothesis in view. Why
so? The answer is that on the assumption that the null hypothesis is
true, one can assign probabilities to different possible sample
results, but this cannot be done if we proceed with the alternative
hypothesis. Hence, the use of the null hypothesis (at times also known
as the statistical hypothesis) is quite frequent.
3.3.2 Parameter and Statistic :
The main objective of sampling is to draw inferences about
the characteristics of the population on the basis of a study made
on the units of a sample. The statistical measures calculated from
the numerical data obtained from population units are known as
Parameters. Thus, a parameter may be defined as a characteristic
of a population based on all the units of the population. The
statistical measures calculated from the numerical data obtained
from sample units are known as Statistics. Thus a statistic may be
defined as a statistical measure of sample observations, and as such
it is a function of sample observations. If the sample observations
are denoted by x1, x2, x3, ………, xn, then a statistic T may be
expressed as T = f(x1, x2, x3, ………, xn).
3.3.3 Type I and Type II errors :
In the context of testing of hypotheses, there are basically
two types of errors we can make. We may reject H0 when H0 is
true, and we may accept H0 when in fact H0 is not true. The former
is known as a Type I error and the latter as a Type II error. In other
words, a Type I error means rejection of a hypothesis which should
have been accepted, and a Type II error means accepting a
hypothesis which should have been rejected. A Type I error is
denoted by α (alpha), also called the level of significance of the
test; and a Type II error is denoted by β (beta). In tabular form the
said two errors can be presented as follows :

              H0 is true           H0 is false
Accept H0     Correct decision     Type II error
Reject H0     Type I error         Correct decision
The probability of a Type I error is usually determined in
advance and is understood as the level of significance of testing the
hypothesis. If the Type I error is fixed at 5 per cent, it means that there
are about 5 chances in 100 that we will reject H0 when H0 is true.

We can control the Type I error just by fixing it at a lower level.
For instance, if we fix it at 1 per cent, we will say that the maximum
probability of committing a Type I error would only be 0.01.
But with a fixed sample size, n, when we try to reduce the Type I
error, the probability of committing a Type II error increases. Both
types of errors cannot be reduced simultaneously. There is a trade-off
between these two types of errors, which means that the
probability of making one type of error can only be reduced if we
are willing to increase the probability of making the other type of
error. To deal with this trade-off in business situations, decision
makers decide the appropriate level of Type I error by examining
the costs or penalties attached to both types of errors. If a Type I
error involves the time and trouble of reworking a batch of
chemicals that should have been accepted, whereas a Type II error
means taking a chance that an entire group of users of this
chemical compound will be poisoned, then in such a situation one
should prefer a Type I error to a Type II error. As a result one must
set a very high level for the Type I error in one's testing technique of a
given hypothesis. Hence, in the testing of hypotheses, one must
make all possible effort to strike an adequate balance between
Type I and Type II errors.
3.3.4 The level of significance :
This is a very important concept in the context of hypothesis
testing. We reject a null hypothesis on the basis of the results
obtained from the sample. When is such a rejection justifiable?
Obviously, when it is not a chance outcome. Statisticians
generally consider that an event is improbable only if it is among
the extreme 5 per cent or 1 per cent of the possible outcomes. To
illustrate, suppose we are studying the problem of non-attendance
at lectures among college students. Then the entire number of
college students is our population, and the number is very large.
The study is conducted by selecting a sample from this population,
and it gives some result (outcome). Now, it is possible to draw a
large number of different samples of a given size from this
population, and each sample will give some result called a statistic.
These statistics have a probability distribution if the sampling is
based on probability. The distribution of the statistic is called a
'sampling distribution'. This distribution is normal if the population
is normal and the sample size is large, i.e. greater than 30. When we
reject a null hypothesis at, say, the 5 per cent level, it implies that only 5
per cent of sample values are extreme or highly improbable and our
results are probable to the extent of 95 per cent (i.e. 1 − 0.05 = 0.95).

Figure No. 3.1
For example, the above figure shows a normal probability
curve. The total area under this curve is one. The shaded areas at
both extremes show the improbable outcomes. This area together
is 0.05 or 5 per cent. It is called the region of rejection. The other
area is the acceptance region. The percentage that divides the
entire area into the region of rejection and the region of acceptance is
called the level of significance. The acceptance region, which is
0.95 or 95 per cent of the total area, is called the level of
confidence. These are probability levels. The level indicates the
confidence with which the null hypothesis is rejected. It is common
to use 1 per cent or 5 per cent levels of significance. Thus, the
decision rule is specified in terms of a specific level of significance.
If the sample result falls within the specified region of rejection, the
null hypothesis is rejected at that level of significance. It implies that
there is only a specified chance or probability (say, 1 per cent or 5
per cent) that we are rejecting H0 even when it is true, i.e. a
researcher is taking the risk of rejecting a true hypothesis with a
probability of 0.05 or 0.01 only. The level of significance is usually
determined in advance of testing the hypothesis.
3.3.5 Critical region :
As shown in the above figure, the shaded areas at both
extremes are called the Critical Region, because this is the region of
rejection of the null hypothesis H0, according to the testing
procedure specified.
Check your progress :
1. Which basic concepts regarding hypothesis testing have you
studied?
2. Define :
i. Null Hypothesis
ii. Alternative Hypothesis
3. What do you mean by parameter and statistic?
4. What are the Type I and Type II errors?
5. What are the level of significance and the level of confidence?
6. What is the Critical Region?
3.4 PROCESS OF HYPOTHESES TESTING

Hypotheses testing is a systematic method used to
evaluate the data collected. It serves as an aid in the process of
decision making. The testing of hypotheses is conducted through
several steps, which are given below.
a. State the hypotheses of interest
b. Determine the appropriate test statistic
c. Specify the level of statistical significance
d. Determine the decision rule for rejecting or not rejecting the null hypothesis
e. Collect the data and perform the needed calculations
f. Decide to reject or not to reject the null hypothesis
In order to provide more detail on the above steps in the
process of hypotheses testing, each step will be explained here
with a suitable example to make the steps easy to understand.
1. Stating the Hypotheses
The statistical analysis of any research study includes at
least two hypotheses: one is the null hypothesis and the other is the
alternative hypothesis.

The hypothesis being tested is referred to as the null
hypothesis and is designated H0. It is also referred to as the
hypothesis of difference. It should include a statement which is
to be proved wrong.

The alternative hypothesis presents the alternative to the null
hypothesis. It includes a statement of inequality. The null
hypothesis and the alternative hypothesis are complementary.

The null hypothesis is the statement that is believed to be
correct throughout the analysis, which is based on this null
hypothesis. For example, the null hypothesis might state that the
average age for entering a management institute is 20 years. So,
average age for institute entry = 20 years.
2. Determining the Appropriate Test Statistic
The appropriate test statistic to be used in statistical
hypothesis testing is based on various characteristics of the sample
population of interest, including sample size and distribution.

The test statistic can assume many numerical values. As the
value of the test statistic has a significant effect on the decision, one
must use the appropriate statistic in order to obtain meaningful
results. The formula to be used while testing a population mean is :

Z = (x̄ − μ) / (σ/√n)

where Z = test statistic, x̄ = mean of sample, μ = mean of population,
σ = standard deviation, n = number of samples.
3. The Significance Level
As already explained, the null hypothesis can be rejected, or we
can fail to reject the null hypothesis. A null hypothesis that is rejected
may in reality be true or false.

A null hypothesis that fails to be rejected may in reality be
true or false. The outcome that a researcher desires is to reject a
false null hypothesis or fail to reject a true null hypothesis. However,
there is always the possibility of rejecting a true hypothesis or failing
to reject a false hypothesis.
Type I and Type II Errors
Type I : the error of rejecting a null hypothesis that is true.
Type II : the error of failing to reject a false null hypothesis.

The probability of committing a Type I error is termed α, and
that of a Type II error is termed β.
4. Decision Rule
Before the collection and analysis of data it is necessary to
decide under which conditions the null hypothesis will be rejected
or fail to be rejected. The decision rule can be stated in terms of the
computed test statistic or in probabilistic terms. The same decision
will be applicable under either method selected.
5. Data Collection and Calculation Performance
In the research process, the method of data collection is
decided at an early stage. Once the research problem is decided, a
decision in respect of the type and sources of data should be taken
immediately. It must be clear which type of data will be needed for
the purpose of the study and how the researcher plans to collect the
required data.

This decision provides the base for the processing and analysing
of data. It is advisable to make use of approved methods of
research for collecting and analysing data.
6. Decision on the Null Hypothesis
The decision regarding the null hypothesis is an important step
in the process, governed by the decision rule.

Under the said decision rule one has to reject or fail to reject
the null hypothesis. If the null hypothesis is rejected, then the
alternative hypothesis can be accepted. If one fails to reject the null
hypothesis, one can only suggest that the null hypothesis may be true.

7. Two Tailed and One Tailed Tests
In the case of testing of hypotheses, both of the above terms
are quite important and they must be clearly understood. A two
tailed test rejects the null hypothesis if the sample mean is
significantly higher or lower than the hypothesized value of the
mean of the population. Such a test is appropriate when the null
hypothesis is some specified value and the alternative hypothesis
is a value not equal to the specified value of the null hypothesis.
3.5 PROCEDURE FOR TESTING OF HYPOTHESES

Testing of hypotheses means deciding the validity of a
hypothesis on the basis of the data collected by the researcher. In
the testing procedure we have to decide whether the null hypothesis
is accepted or not accepted.

This is conducted through several steps leading to a choice
between two courses of action, i.e. rejection or acceptance of the
null hypothesis. The steps involved in testing of hypotheses are given
below.
1. Setting up of Hypotheses
This step consists of hypothesis setting. In this step a formal
statement in relation to the hypotheses is made. In traditional
practice, instead of one, two hypotheses are set. In case one
hypothesis is rejected, the other hypothesis is accepted. Hypotheses
should be clearly stated in respect of the nature of the research
problem. These hypotheses are :
a. Null hypothesis, and
b. Alternative hypothesis.

Acceptance or rejection of a hypothesis is based on the
sampling information. Any sample which we draw from the
population will vary from it; therefore it is necessary to judge
whether these differences are statistically significant or insignificant.

The formulation of hypotheses is an important step which
must be accomplished, and necessary care should be taken as per
the requirement and object of the research problem under
consideration.

This step should also specify whether a one tailed or two tailed
test will be used.
2. Selecting the Statistical Technique
In this stage we make a selection of the statistical technique
to be used. There are various statistical tests which are used in
testing of hypotheses. These tests are :
Z – Test
t – Test
F – Test
χ² – Test

It is the job of the researcher to make the proper selection of the
test.

The Z-test is used when the hypothesis relates to a large
sample (30 or more).

The t-test is used when the hypothesis relates to a small sample
(less than 30).

The selection of the test will depend on various considerations
like the variables involved, sample size, type of data and
whether the samples are related or independent.
3. Selecting the Level of Significance
This stage consists of making a selection of the desired level of
significance. The researcher should specify the level of significance,
because testing of hypotheses is based on a pre-determined level of
significance. The rejection or retention of the hypothesis by the
researcher is also based on the significance level.

The level of significance is generally expressed in percentage
form, such as 5% or 1%. If a 5% level of significance is accepted
by the researcher, it means he will be making a wrong decision
about 5% of the time. In case the hypothesis is rejected at the 5%
level, he will be taking the risk of rejecting a true hypothesis on
5 out of 100 occasions.

The following factors may affect the level of significance :
- The magnitude of the difference between sample means
- The size of the sample
- The validity of measurement
4. Determining the Sampling Distribution
The next step after deciding the significance level in testing of
hypotheses is to determine the appropriate sampling distribution.
The choice generally lies between the normal distribution and the
t-distribution.
5. Selecting the Sample and Computing a Value
In this step a random sample is selected and the appropriate value
is computed from the sample data relating to the test statistic by
utilizing the relevant distribution.
6. Performing the Computations
In this step the calculations are done. The calculations include
the test statistic and the standard error.

A hypothesis is tested for the following four possibilities: that
the hypothesis is
a - true, but the test leads to its rejection
b - false, but the test leads to its acceptance
c - true, and the test leads to its acceptance
d - false, and the test leads to its rejection

Of the above four possibilities, a and b lead to wrong decisions:
a leads to a Type I error and b leads to a Type II error.
7. Statistical Decision
This is the step in which we have to draw the statistical decision
involving the acceptance or rejection of the hypothesis.

This will depend on whether the calculated value of the test
falls in the region of acceptance or in the region of rejection at the
given significance level.

If the hypothesis is tested at the 5% level and the observed
result has a probability of less than 5%, then we consider the
difference between the hypothetical parameter and the sample
statistic to be significant.
3.6 REFERENCE

S. Shyamala and Navdeep Kaur, 'Introductory Econometrics'.
Neeraj R. Hatekar, 'Principles of Econometrics : An Introduction (Using R)'.
4
TEST OF HYPOTHESIS : VARIOUS DISTRIBUTION TESTS
Unit Structure :
4.0 Objectives
4.1 Introduction
4.2 Testing of Hypotheses using various distribution tests
4.3 Standardization : Calculating Z-scores
4.4 Uses of the t-Test
4.5 F-Test
4.6 Chi-square Test
4.7 Reference
4.0 OBJECTIVES
To understand the various distribution tests for hypothesis testing.
To understand the uses of the t-test.
To understand the uses of the F-test and the Chi-square test.
4.1 INTRODUCTION
The tests of significance used for hypothesis testing are of two
types: parametric and non-parametric tests.

The parametric tests are more powerful, but they depend on
the parameters or characteristics of the population. They are based
on the following assumptions :
1. The observations or values must be independent.
2. The population from which the sample is drawn on a random
basis should be normally distributed.
3. The populations should have equal variances.
4. The data should be measured at least at the interval level so that
arithmetic operations can be used.
4.2 TESTING OF HYPOTHESIS USING VARIOUS DISTRIBUTION TESTS
A. The Parametric Tests :
a) The Z-Test
Prof. R.A. Fisher developed the Z-test. It is based on the
normal distribution. It is widely used for testing the significance of
several statistics such as the mean, median, mode, coefficient of
correlation and others. This test is used even when the binomial
distribution or t-distribution is applicable, on the presumption that
such a distribution tends to approximate the normal distribution as
the sample size (n) becomes larger.
b) The t-Test
The t-test was developed by W.S. Gosset around 1915.
Since he published his findings under the pen name 'Student', it is
known as Student's t-test. It is suitable for testing the significance
of a sample mean, or for judging the significance of the difference
between the means of two samples, when the samples are less
than 30 in number and when the population variance is not known.
When two samples are related, the paired t-test is used. The t-test
can also be used for testing the significance of the coefficients of
simple and partial correlation.

In determining whether the mean of a sample drawn from a
normal population deviates significantly from a stated value when
the variance of the population is unknown, we calculate the statistic

t = (x̄ − μ) / (s/√(n − 1))
Where,xthe mean of samplethe actually or hypothetical mean of populationnthe sample size
s = standard deviation of the sampl es
Example
Ten oil tins are taken at random from an automatic filling
machine. The mean weight of the 10 tins is 15.8 kg and the standard
deviation is 0.5 kg. Does the sample mean differ significantly from
the intended weight of 16 kg?
(Given that for ν = 9, t0.05 = 2.26.)
Solution :
Let us take the hypothesis that the sample mean does not
differ significantly from the intended weight of 16 kg. Applying the
t-test :

t = (15.8 − 16) / (0.5/√(10 − 1)) = −0.2 × 3/0.5 = −1.2, so |t| = 1.2

The calculated value of t is less than the table value. The
hypothesis is accepted.
c) The F-test
The F-test is based on the F-distribution (which is a
distribution skewed to the right that tends to become more
symmetrical as the number of degrees of freedom in the numerator
and denominator increases).

The F-test is used to compare the variances of two
independent samples at a time. It is also used for judging the
significance of multiple correlation coefficients.
B. The Non-parametric Tests
The non-parametric tests are distribution-free tests, as they
are not based on the characteristics of the population. They do not
require a normally distributed population or equal variances. They
are easy to understand and to use.

The important non-parametric tests are :
- The chi-square test
- The median test
- The Mann-Whitney U test
- The sign test
- The Wilcoxon matched-pairs test
- The Kolmogorov-Smirnov test
The Chi-Square Test (χ²)
The chi-square test is the most popular non-parametric test
of significance in social science research. It is used to make
comparisons between two or more nominal variables. Unlike the
other tests of significance, the chi-square test is used to make
comparisons between frequencies rather than between mean
scores. This test evaluates whether the difference between the
observed frequencies and the expected frequencies under the null
hypothesis can be attributed to chance or to actual population
differences. A chi-square value is obtained by the formula :

χ² = Σ (fA − fe)² / fe

Where,
χ² = chi-square
fA = observed or actual frequency
fe = expected frequency

χ² can also be determined with the help of the following formula :

χ² = Σ (fA² / fe) − N
N = total of frequencies
Example :
The weights of 7 persons are given as below (data table not
reproduced). From this information, can we say that the variance of
the population from which this sample of 7 persons was drawn is
equal to 30 kg?

Test this at the 5% and 1% levels of significance.
Solution :
From the above information we work out the variance of the
sample data.

The degrees of freedom are (n − 1) = (7 − 1) = 6

At the 5% level of significance, the table value is χ² = 12.592;
at the 1% level, χ² = 16.812.

Both table values are greater than the calculated χ² = 8.6, so
we accept the null hypothesis at both the 5% and 1% levels. Hence
the sample may be regarded as drawn from a population with
variance 30 kg.
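A Python sketch of this test. The original data table is not reproduced above, so the seven weights below are hypothetical; the test statistic is Σ(X − X̄)² / σ₀², compared with the table values at 6 degrees of freedom :

weights = [49, 52, 55, 58, 60, 62, 65]  # hypothetical sample of 7 weights
sigma0_sq = 30                          # hypothesized population variance
n = len(weights)
x_bar = sum(weights) / n

# Chi-square statistic for a variance test, df = n - 1.
chi2 = sum((x - x_bar) ** 2 for x in weights) / sigma0_sq
print(chi2)  # accept H0 if below 12.592 (5%) / 16.812 (1%) for df = 6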
The standard normal distribution and its application :
Normal distributions do not necessarily have the same
means and standard deviations. A normal distribution with a mean
of 0 and a standard deviation of 1 is called the standard normal
distribution. It is centred at zero, and the degree to which a given
measurement deviates from the mean is given by the standard
deviation. This distribution is also known as the Z-distribution.

A value on the standard normal distribution is known as a
standard score or Z-score; it tells how many standard deviations
above or below the mean a specific observation falls. For example,
a standard score of 1.5 indicates that the observation is 1.5 standard
deviations above the mean. On the other hand, a negative score
represents a value below the average. The mean has a Z-score of 0.
4.3 STANDARDIZATION : CALCULATING Z-SCORES
The process of standardization allows us to compare
observations and calculate probabilities across different
populations, i.e. it allows us to take observations drawn from normally
distributed populations that have different means and standard
deviations and place them on a standard scale. To standardize the
data, we need to convert the raw measurements into Z-scores.

To calculate the standard score for an observation, the following
formula can be used :

Z = (X − μ) / σ

X = raw value of the measurement of interest
μ and σ = parameters for the population from which the
observation is drawn
Let us discuss this with an example of mangoes and apples.
Let's compare their weights. A mango weighs 110 grams and an
apple weighs 100 grams. By comparing merely their raw values we
can observe that the mango weighs more than the apple. Now we
will compare their standard scores. Assume that the weights of
mangoes and apples follow normal distributions with the following
parameter values :

                      Mangoes   Apples
Mean weight (grams)   100       140
Standard deviation    15        25

We will use these values to get the Z-scores :

Mangoes : Z = (110 − 100) / 15 = 0.667
Apples : Z = (100 − 140) / 25 = −1.6
The Z -score for the Mangoes is (0.667) positive which
means that Mangoes weight more than the average Apple. It is not
an extreme value by any means, but it is above average for
mangoes. On the other h and the Apples has fairly negative Z -
score ( -1.6). It is much below the mean weight for apples.
To find areas under the curve of a normal distribution for it,
we will use Z -score table.
Let's take the Z-score for the mango (0.667) and use it to determine its weight percentile. A percentile is the proportion of a population that falls below a specific value. To determine the percentile, we need to find the area that corresponds to the range of Z-scores less than 0.667. The closest value to it in the Z-score table is 0.65. The table value indicates that the area of the curve between −0.65 and +0.65 is 48.43%. But we want the area that is less than a Z-score of 0.65.
The two halves of the normal distribution are mirror images of each other. So if the area for the interval from −0.65 to +0.65 is 48.43%, then the range from 0 to +0.65 must be half of that: 48.43 / 2 = 24.215%.

We also know that the area for all scores less than zero is half (50%) of the distribution. Therefore the area for all scores up to 0.65 is 50% + 24.215% = 74.215%.

So, the mango is at approximately the 74th percentile.
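The same computation can be carried out exactly rather than from a printed Z-table. Below is a minimal sketch, assuming the textbook's means and standard deviations (100/15 for mangoes, 140/25 for apples); the small difference from 74.215% arises because the table rounded 0.667 to 0.65.

```python
# Z-scores and percentiles via the standard normal CDF.
from scipy.stats import norm

def z_score(x, mu, sigma):
    # Standard score: number of SDs the raw value lies from the mean.
    return (x - mu) / sigma

z_mango = z_score(110, 100, 15)   # 0.667
z_apple = z_score(100, 140, 25)   # -1.6

# Percentile = area under the standard normal curve below z.
print(norm.cdf(z_mango))  # ~0.748, i.e. about the 75th percentile
print(norm.cdf(z_apple))  # ~0.055
```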
Student's t distribution:

In the case of a large sample, the Z-test is

$$Z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \sim N(0, 1)$$

If the population variance σ² is unknown, the sample variance is used and the normal test is applied. But when the sample is small, the unbiased estimate of the population variance is used, i.e.

Unbiased variance of sample: $S^2 = \dfrac{\sum (X - \bar{X})^2}{n - 1}$

Biased variance: $s^2 = \dfrac{\sum (X - \bar{X})^2}{n}$

In small samples σ² is replaced by the unbiased S² and not by the biased s².

Student's t: If $x_1, x_2, \ldots, x_n$ is a random sample of size n from a normal population with mean μ and variance σ², then Student's t statistic is given by

$$t = \frac{\bar{X} - \mu}{\sqrt{S^2/n}}, \qquad S^2 = \frac{\sum (X - \bar{X})^2}{n - 1}$$

where X̄ = sample mean and μ = population mean.
4.4 USES OF THE t-TEST

1) t-test for a single mean:

It is used to test the hypothesis that the population mean μ has a specified value μ₀ when the population standard deviation σ is not known and the sample is small (n < 30). We use

$$t = \frac{\bar{X} - \mu_0}{\sqrt{S^2/n}}$$

which follows the t-distribution with (n − 1) degrees of freedom, where $S^2 = \dfrac{\sum (X - \bar{X})^2}{n - 1}$.
Steps for applying the t-test:

a) Set up the null hypothesis H₀: μ = μ₀ against the alternative hypothesis H₁: μ ≠ μ₀ (two-tailed test), or H₁: μ > μ₀ or μ < μ₀ (one-tailed test).

b) Find $S^2 = \dfrac{\sum (X - \bar{X})^2}{n-1}$, i.e. $\sum (X - \bar{X})^2 = (n-1)S^2$, where S² is the unbiased variance. The biased variance is $s^2 = \dfrac{\sum (X - \bar{X})^2}{n}$, i.e. $\sum (X - \bar{X})^2 = n s^2$. Since $(n-1)S^2 = n s^2$, we have $S^2 = \dfrac{n s^2}{n-1}$.

c) Use the values in the t-test and compare the calculated value with the table value for ν = n − 1 degrees of freedom.

d) If the calculated value is greater than the table value, accept H₁, and vice versa.
Suppose a group of 5 students has weights 42, 39, 48, 60 and 41 kg. Can it be said that this sample has come from a population whose mean weight is 48 kg?
Solution:

     Weight (X)   (X − X̄)²
1        42        16 = (42 − 46)²
2        39        49 = (39 − 46)²
3        48         4 = (48 − 46)²
4        60       196 = (60 − 46)²
5        41        25 = (41 − 46)²

n = 5, ΣX = 230, Σ(X − X̄)² = 290

$$\bar{X} = \frac{230}{5} = 46, \qquad S^2 = \frac{\sum (X - \bar{X})^2}{n-1} = \frac{290}{4} = 72.5$$

H₀: μ = 48 (no significant difference between sample mean and population mean)
H₁: μ ≠ 48 (significant difference between sample mean and population mean)

$$t = \frac{\bar{X} - \mu}{\sqrt{S^2/n}} = \frac{46 - 48}{\sqrt{72.5/5}} = \frac{-2}{\sqrt{14.5}} = \frac{-2}{3.81} = -0.525, \qquad |t| = 0.525$$

The table value of t at the 5% level of significance for a two-tailed test with ν = 5 − 1 = 4 is 2.776.

Since |t| < t(0.05, 4), we accept H₀ and conclude that the mean weight of the population is 48 kg.
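The same test is available directly in scipy. A minimal sketch using the weights above:

```python
# One-sample t-test for the gym-weights example.
from scipy import stats

weights = [42, 39, 48, 60, 41]
t_stat, p_value = stats.ttest_1samp(weights, popmean=48)
print(round(t_stat, 3))   # ~ -0.525, matching the hand computation
print(p_value > 0.05)     # True: H0 (population mean = 48 kg) is not rejected
```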

ii) t-test for difference of means:

Suppose two independent samples have been taken from two normal populations having the same mean and equal population variances. The hypothesis H₀: μₓ = μᵧ states that the two samples have come from normal populations with the same means.

$$t = \frac{\bar{X} - \bar{Y}}{S\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}}, \qquad S^2 = \frac{\sum (X - \bar{X})^2 + \sum (Y - \bar{Y})^2}{n_1 + n_2 - 2}$$
Let us discuss this with the help of the following example. In an examination, 12 students in Class A had a mean score of 78 with standard deviation 6, whereas 15 students in Class B had a mean score of 74 with standard deviation 8. Is there a significant difference between the means of the two classes?
Solution:

n₁ = 12, X̄ = 78, sₓ = 6
n₂ = 15, Ȳ = 74, sᵧ = 8

H₀: μₓ = μᵧ (no significant difference between the means of the two classes)
H₁: μₓ ≠ μᵧ (significant difference between the means of the two classes)

Since the given standard deviations are sample (biased) values, $\sum (X - \bar{X})^2 = n_1 s_x^2$ and $\sum (Y - \bar{Y})^2 = n_2 s_y^2$, so

$$S^2 = \frac{n_1 s_x^2 + n_2 s_y^2}{n_1 + n_2 - 2} = \frac{12(36) + 15(64)}{25} = \frac{432 + 960}{25} = \frac{1392}{25} = 55.68, \qquad S = 7.46$$

$$t = \frac{\bar{X} - \bar{Y}}{S\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}} = \frac{78 - 74}{7.46\sqrt{\dfrac{1}{12} + \dfrac{1}{15}}} = \frac{4}{7.46 \times 0.387} = \frac{4}{2.89} = 1.38$$

The table value of t for ν = n₁ + n₂ − 2 = 25 at the 5% level of significance for a two-tailed test is 2.060.

Since the calculated t (1.38) is less than the table value (2.060), we accept H₀ and conclude that there is no significant difference between the sample means.
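A minimal sketch of this computation from summary statistics, mirroring the pooled-variance formula above (the given SDs are treated as biased, i.e. computed with divisor n):

```python
# Two-sample t statistic from summary statistics.
import math

n1, mean1, s1 = 12, 78, 6
n2, mean2, s2 = 15, 74, 8

# Pooled variance: (n1*s1^2 + n2*s2^2) / (n1 + n2 - 2)
S2 = (n1 * s1**2 + n2 * s2**2) / (n1 + n2 - 2)
t = (mean1 - mean2) / math.sqrt(S2 * (1 / n1 + 1 / n2))
print(round(t, 2))  # ~1.38, below the critical value 2.060
```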
iii) t-test for difference of means with dependent samples (paired t-test):

This test is applicable when the two samples are dependent. Following are the conditions to apply this test:

- The two samples should be of equal size (n₁ = n₂).
- The sample observations of X and Y are dependent in pairs.

The formula for the paired t-test is

$$t = \frac{\bar{d}}{\sqrt{S_d^2/n}}, \qquad S_d^2 = \frac{\sum (d_i - \bar{d})^2}{n - 1} = \frac{1}{n-1}\left[\sum d_i^2 - \frac{(\sum d_i)^2}{n}\right]$$

where dᵢ = xᵢ − yᵢ (x and y are the sample observations), i.e. the difference between each matched pair.
Suppose a test is conducted for 5 students in a coaching centre to know the subject knowledge of the students before and after tutoring for one month.

Students              1     2     3     4     5
Result before test   110   120   123   132   125
Result after test    120   118   125   136   121

Is there any change in result after tutoring?
Solution:

Xᵢ     Yᵢ     dᵢ = Xᵢ − Yᵢ    dᵢ²
110    120       −10         100
120    118         2           4
123    125        −2           4
132    136        −4          16
125    121         4          16

Σdᵢ = −10, Σdᵢ² = 140

$$\bar{d} = \frac{\sum d_i}{n} = \frac{-10}{5} = -2, \qquad S_d^2 = \frac{1}{n-1}\left[\sum d_i^2 - \frac{(\sum d_i)^2}{n}\right] = \frac{1}{4}\left[140 - \frac{100}{5}\right] = \frac{120}{4} = 30$$

H₀: μₓ = μᵧ (mean scores before and after tutoring are the same)
H₁: μₓ ≠ μᵧ (mean scores before and after tutoring are not the same)

$$t = \frac{\bar{d}}{\sqrt{S_d^2/n}} = \frac{-2}{\sqrt{30/5}} = -0.816, \qquad |t| = 0.816$$

The table value of t at the 5% level of significance (two-tailed test) for ν = 5 − 1 = 4 is 2.776.

Since |t| = 0.816 < t(0.05, 4) = 2.776,

H₀ is accepted, and we conclude that there is no significant difference in the scores of the students after one month of tutoring.
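A minimal sketch of the same paired test with scipy, using the before/after scores above:

```python
# Paired t-test for the tutoring example.
from scipy import stats

before = [110, 120, 123, 132, 125]
after = [120, 118, 125, 136, 121]
t_stat, p_value = stats.ttest_rel(before, after)
print(round(t_stat, 3))  # ~ -0.816, matching the hand computation
print(p_value > 0.05)    # True: H0 (no change after tutoring) stands
```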
iv) t-test for significance of an observed sample correlation coefficient:

When r is a sample correlation coefficient and ρ is the (unknown) population correlation, the t-test is applied to test the significance of the correlation coefficient:

$$t = \frac{r}{SE_r}, \qquad SE_r = \frac{\sqrt{1 - r^2}}{\sqrt{n - 2}}, \quad \text{i.e.} \quad t = \frac{r\sqrt{n-2}}{\sqrt{1-r^2}}$$

Let us assume that the coefficient of correlation of a sample of 27 pairs of observations is 0.42. Is it likely that the variables in the population are not correlated?
Solution:

Let H₀: ρ = 0 (the variables in the population are uncorrelated)
H₁: ρ ≠ 0 (the variables in the population are correlated)

$$t = \frac{r\sqrt{n-2}}{\sqrt{1-r^2}} = \frac{0.42\sqrt{27-2}}{\sqrt{1-(0.42)^2}} = \frac{0.42 \times 5}{\sqrt{0.8236}} = \frac{2.1}{0.9075} = 2.314$$

ν = n − 2 = 25. The table value of t for 25 degrees of freedom is 2.060.

Since t = 2.314 > 2.060, H₁ is accepted and we conclude that the variables in the population are correlated.
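A minimal sketch of this significance test for r (n = 27, r = 0.42):

```python
# t statistic for testing a sample correlation coefficient.
import math

r, n = 0.42, 27
t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)
print(round(t, 3))  # ~2.314 > 2.060, so the correlation is significant
```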

4.5 F-TEST

The F statistic is the ratio of two independent chi-square variates divided by their respective degrees of freedom. Critical values of the F test are based on a right-tailed test and depend on ν₁ (degrees of freedom for the numerator) and ν₂ (degrees of freedom for the denominator): F(ν₁, ν₂).

The F-test is used to test the equality of population variances, with H₀: σ₁² = σ₂² (the population variances are the same).

$$F = \frac{S_1^2}{S_2^2} = \frac{\text{larger estimate of population variance}}{\text{smaller estimate of population variance}}$$

where S₁² and S₂² are unbiased estimates of the common population variance σ² and are given by

$$S_1^2 = \frac{\sum (X - \bar{X})^2}{n_1 - 1}, \qquad S_2^2 = \frac{\sum (Y - \bar{Y})^2}{n_2 - 1}$$

with ν₁ = n₁ − 1 and ν₂ = n₂ − 1. This test is also called the variance ratio test.

The unbiased and biased variances are related as follows: since $S_1^2 = \dfrac{\sum (X - \bar{X})^2}{n_1 - 1}$ and $s_1^2 = \dfrac{\sum (X - \bar{X})^2}{n_1}$, both $(n_1 - 1)S_1^2$ and $n_1 s_1^2$ equal $\sum (X - \bar{X})^2$; hence $(n_1 - 1)S_1^2 = n_1 s_1^2$. Similarly we can find the relation between S₂² and s₂².
Assumptions of the F-test:

- The samples should be random.
- The sample observations should be independent.
- The samples should be taken from normal populations.
Let us discuss the F-test with the help of the following example. Suppose two samples gave the following results:

Sample   Size   Mean   Sum of squares of deviations from mean
  1       10     15                    90
  2       12     14                   108

Test the equality of the sample variances.
Solution:

Let H₀: σ₁² = σ₂² (the difference in the variances of the two samples is not significant).

Given n₁ = 10, n₂ = 12, Σ(X − X̄)² = 90, Σ(Y − Ȳ)² = 108:

$$S_1^2 = \frac{\sum (X - \bar{X})^2}{n_1 - 1} = \frac{90}{9} = 10, \qquad S_2^2 = \frac{\sum (Y - \bar{Y})^2}{n_2 - 1} = \frac{108}{11} = 9.82$$

Applying the F-test:

$$F = \frac{S_1^2}{S_2^2} = \frac{10}{9.82} = 1.02$$

For ν₁ = n₁ − 1 = 9 and ν₂ = n₂ − 1 = 11, F(0.05) = 2.90.

Since F < F(0.05), H₀ is accepted and we conclude that there is no significant difference in the variances.
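A minimal sketch of this variance-ratio test, with the critical value drawn from scipy rather than a printed table:

```python
# Variance-ratio (F) test for the two samples above.
from scipy.stats import f

S1_sq = 90 / 9      # larger variance estimate (sample 1)
S2_sq = 108 / 11    # smaller variance estimate (sample 2)
F = S1_sq / S2_sq
F_crit = f.ppf(0.95, dfn=9, dfd=11)
print(round(F, 2), round(F_crit, 2))  # 1.02 < 2.90: accept H0
```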

4.6 CHI-SQUARE TEST

Properties of the χ² distribution:

1) The moment generating function of the χ² distribution is $M(t) = (1 - 2t)^{-n/2}$, with parameters 1/2 and n/2.

2) The mean of the χ² distribution is n.

3) The variance of the χ² distribution is 2n.

4) The skewness of the χ² distribution is β₁ = 8/n > 0, i.e. the χ² distribution is positively skewed. But as n → ∞, β₁ → 0 and the distribution becomes normal.

5) The kurtosis of the χ² distribution is β₂ = 3 + 12/n > 3, i.e. the χ² distribution is leptokurtic. But as n → ∞, 12/n → 0 and the distribution tends to mesokurtic.

6) The χ² distribution tends to the normal distribution as n → ∞.

7) The sum of independent chi-square variates is also a chi-square variate.
Applications of the chi-square distribution

i) Goodness of fit:

This test is used to test whether the experimental results support a particular hypothesis or theory. Assuming the null hypothesis that there is no significant difference between the observed and expected frequencies, the statistic

$$\chi^2 = \sum \frac{(O_i - E_i)^2}{E_i}$$

follows a chi-square distribution with ν = n − 1 degrees of freedom, where Oᵢ = observed frequency and Eᵢ = expected or theoretical frequency.

Steps to compute the χ² test:

- Consider the null hypothesis H₀ that the theory fits the data well.
- Compute the expected frequencies Eᵢ corresponding to the observed frequencies Oᵢ under the considered hypothesis.
- Compute the deviations (Oᵢ − Eᵢ)².

- Divide the squared deviations (Oᵢ − Eᵢ)² by the corresponding expected frequencies, i.e. (Oᵢ − Eᵢ)²/Eᵢ.
- Add the values obtained in the above step to calculate

$$\chi^2 = \sum \frac{(O_i - E_i)^2}{E_i}$$

- Calculate the degrees of freedom, ν = n − 1.
- Find the table value of χ² for (n − 1) degrees of freedom at a certain level of significance.
- Compare the calculated value of χ² with the table value. If the calculated χ² is less than or equal to the table value at the 5% level, accept the null hypothesis and conclude that there is a good fit between theory and experiment. If the calculated value of χ² is greater than the table value, reject the null hypothesis and conclude that the experiment does not support the theory.
The chi-square test can be used under the following conditions:

1) The sample observations should be independent.
2) ΣOᵢ = ΣEᵢ = N.
3) The total frequency N should be greater than 50, i.e. N > 50.
4) No expected frequency should be less than 5. If any expected cell frequency is less than 5, we cannot use the χ² test directly. In that case we use the pooling technique, where we add the frequencies which are less than 5 to the succeeding or preceding frequency so that the sum becomes more than 5, and adjust the degrees of freedom accordingly.
5) The given distribution should not be replaced by relative frequencies or proportions; the data should be given in original units.
Let us discuss this with the help of an example. A sample analysis of the examination results of 450 final year degree students was made. It was found in the analysis that 200 students failed, 160 got a pass class, 75 got second class and only 15 students got first class. Find out whether these figures are consistent with the general final year degree examination result, which is in the ratio 4:3:2:1 for the above mentioned categories respectively.

Solution:

Assuming the null hypothesis H₀ that the figures are consistent with the general examination result.

Expected frequencies: Fail: 4/10 × 450 = 180; Pass: 3/10 × 450 = 135; Second: 2/10 × 450 = 90; First: 1/10 × 450 = 45.

Category    Oᵢ     Eᵢ     (Oᵢ − Eᵢ)²   (Oᵢ − Eᵢ)²/Eᵢ
Fail        200    180       400          2.22
Pass        160    135       625          4.63
Second       75     90       225          2.50
First        15     45       900         20.00
Total       450    450                   29.35

$$\chi^2 = \sum \frac{(O_i - E_i)^2}{E_i} = 29.35$$

d.f. = 4 − 1 = 3.

The table value of χ² at the 5% level of significance for 3 d.f. is 7.815. Since the calculated χ² value is greater than the table value, i.e. 29.35 > 7.815, H₀ is rejected at the 5% level of significance and we conclude that the figures are not consistent with the general final year degree examination result.
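A minimal sketch of the same goodness-of-fit test with scipy, using the observed and expected frequencies above:

```python
# Chi-square goodness-of-fit test for the examination-results example.
from scipy.stats import chisquare

observed = [200, 160, 75, 15]
expected = [180, 135, 90, 45]
chi2_stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(round(chi2_stat, 2))  # 29.35
print(p_value < 0.05)       # True: reject H0, figures not consistent
```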
ii) Chi-square test for independence of attributes:

Suppose the given population of N items is divided into p mutually disjoint and exhaustive classes A₁, A₂, …, A_p with respect to the attribute A, so that a randomly selected item belongs to one and only one of the classes A₁, A₂, …, A_p. Similarly, suppose the population is divided into q mutually disjoint and exhaustive classes with respect to the attribute B, so that a randomly selected item possesses one and only one of the attributes B₁, B₂, …, B_q. The frequency distribution of the items belonging to the classes A₁, …, A_p and B₁, …, B_q can be represented in a p × q contingency table.
Steps for the test:

- Consider the null hypothesis that the two attributes A and B are independent.
- Compute the expected frequencies Eᵢⱼ corresponding to the observed frequencies Oᵢⱼ. The expected frequency for the cell (Aᵢ, Bⱼ) is

$$E(A_i B_j) = \frac{(A_i)(B_j)}{N}, \qquad i = 1, 2, \ldots, p; \; j = 1, 2, \ldots, q$$

where (Aᵢ) and (Bⱼ) are the row and column totals.
- Compute the squared deviations (Oᵢⱼ − Eᵢⱼ)² and divide them by the corresponding expected frequencies, i.e. (Oᵢⱼ − Eᵢⱼ)²/Eᵢⱼ.
- Add the values obtained in the above step to calculate

$$\chi^2 = \sum \sum \frac{(O_{ij} - E_{ij})^2}{E_{ij}}$$

- Calculate the degrees of freedom: (r − 1)(c − 1), where r = number of rows and c = number of columns.
- Compare the calculated value of χ² with the table value for (r − 1)(c − 1) degrees of freedom at a certain level of significance. If the calculated value of χ² is less than the table value, the null hypothesis is accepted, and vice versa.
Let us discuss this with the help of the following example. The following data on vaccination were collected in a government hospital to find out whether vaccination reduces the severity of attack of influenza.

                    Degree of Severity
                 Very Severe   Severe   Mild
Vaccinated           10          150     240
Not Vaccinated       60           30      10

Use the χ²-test to test the association between the attributes.

Solution:

Observed frequencies:

                 Very Severe   Severe   Mild   Total
Vaccinated           10          150     240     400
Not Vaccinated       60           30      10     100
Total                70          180     250   N = 500

Assume the null hypothesis that the two attributes are independent, i.e. the vaccine is not effective in controlling the severity of attack of influenza. The expected frequencies are as follows:

                 Very Severe          Severe                 Mild                  Total
Vaccinated       70×400/500 = 56      180×400/500 = 144      250×400/500 = 200     400
Not Vaccinated   70 − 56 = 14         180 − 144 = 36         250 − 200 = 50        100
Total            70                   180                    250                   500

Computation of chi-square:

Oᵢ     Eᵢ     (Oᵢ − Eᵢ)²   (Oᵢ − Eᵢ)²/Eᵢ
10     56       2116         37.786
60     14       2116        151.143
150    144        36          0.250
30     36         36          1.000
240    200      1600          8.000
10     50       1600         32.000
                     χ² = 230.179

d.f. = (r − 1)(c − 1) = (2 − 1)(3 − 1) = 2

The table value of χ² for 2 d.f. at the 5% level of significance is 5.99. The computed value of χ² is greater than the table value, so it is highly significant and the null hypothesis is rejected. Hence we conclude that the two attributes are not independent, and vaccination helps to reduce the severity of attack of influenza.
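A minimal sketch of the same independence test with scipy, which computes the expected frequencies internally:

```python
# Chi-square test of independence for the vaccination example.
from scipy.stats import chi2_contingency

table = [[10, 150, 240],   # vaccinated
         [60, 30, 10]]     # not vaccinated
chi2_stat, p_value, dof, expected = chi2_contingency(table)
print(round(chi2_stat, 2), dof)  # ~230.18, df = 2
print(p_value < 0.05)            # True: attributes are not independent
```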
iii)2-test for the population variance
To test if the given normal population has a specified
variance22, we assume the null hypothesis.2200:H
If123, , ,.......,nXX X Xis a random sample of size ‘n’ from the
given population, then under the null hypothesis0H, the statistic
222
22
00XXns
 follows2distribution with1nd.f. where22 1n
iXXSn

denotes the sample variance.
By comparing the calculated value of2with the table value
for1nd.f. at certain level of significance null hypothesis can be
accepted or rejected.
Let us discuss this with the help of the following example. The weights in kg of 10 members of a gym are given below:

36, 40, 45, 55, 47, 44, 56, 48, 53, 46

Can it be said that the population variance is 20 kg²?
Solution:

Assume the null hypothesis H₀: σ² = 20 against the alternative hypothesis H₁: σ² ≠ 20.

Weight (X)   X − X̄    (X − X̄)²
36           −11        121
40            −7         49
45            −2          4
55             8         64
47             0          0
44            −3          9
56             9         81
48             1          1
53             6         36
46            −1          1

ΣX = 470, Σ(X − X̄)² = 366

$$\bar{X} = \frac{\sum X}{n} = \frac{470}{10} = 47, \qquad \chi^2 = \frac{\sum (X - \bar{X})^2}{\sigma_0^2} = \frac{366}{20} = 18.3$$

Degrees of freedom = n − 1 = 10 − 1 = 9. The table value of χ² for 9 d.f. at the 5% level of significance is 16.92.

Since the calculated χ² is greater than the table value of χ² at the 5% level of significance, the null hypothesis is rejected and we conclude that the population variance is not 20 kg².
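A minimal sketch of this variance test, with the critical value from scipy:

```python
# Chi-square test for a population variance (sigma0^2 = 20).
from scipy.stats import chi2

weights = [36, 40, 45, 55, 47, 44, 56, 48, 53, 46]
mean = sum(weights) / len(weights)                       # 47
chi2_stat = sum((x - mean) ** 2 for x in weights) / 20   # 366 / 20
crit = chi2.ppf(0.95, df=len(weights) - 1)
print(round(chi2_stat, 2), round(crit, 2))  # 18.3 > 16.92: reject H0
```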
4.7 REFERENCE

S. Shyamala and Navdeep Kaur, 'Introductory Econometrics'.
Neeraj R. Hatekar, 'Principles of Econometrics: An Introduction (Using R)', SAGE Publications.
munotes.in

MODULE 3
5
ESTIMATED LINEAR REGRESSION
EQUATION AND PROPERTIES OF
ESTIMATORS
Unit Structure :
5.0 Objectives
5.1 Introduction
5.2 The Estimated Linear Regression Equation
5.3 Properties of estimators
5.4 References
5.0 OBJECTIVES
To understand the concepts of the simple linear regression model.
To understand the various tests in regression.
Linear regression models are used to predict the relationship between two variables. The factor which is being predicted is called the dependent variable, and the factors which are used to predict the value of the dependent variable are called the independent variables. So in the simple linear regression model, a straight line approximates the relationship between the dependent variable and the independent variable.
Assuming the two factors that are involved in simple linear regression analysis are X and Y, the equation that describes how Y is related to X is represented by the following formula for a simple linear regression model:

$$Y = \beta_0 + \beta_1 X + \epsilon$$

where β₀ and β₁ are parameters.

This equation contains an error term, represented by ε. It is used to account for the variability in Y that cannot be explained by the linear relationship between X and Y.

For example, in economic theory, consumption (C) is determined by income (Y): C = f(Y) = β₀ + β₁Y.

Here we assume that consumption depends only on income (other determinants of consumption are taken to be constant). But in the real world such an exact relationship between C and Y never exists. Therefore we add an error term 'ε' to the equation, where ε is a random variable called the residual error. The error arises from measurement errors in Y or imperfections in the specification of the function f(Y).

So the standard form of the simple linear regression model is

$$Y_i = f(X_i) + \epsilon_i \qquad (1)$$
$$Y_i = \beta_0 + \beta_1 X_i + \epsilon_i \qquad (2)$$

where Yᵢ = dependent variable, Xᵢ = explanatory or independent variable, β₁ = slope parameter, and β₀ = intercept.
negative linear relationship and no relationship.
i)Norelationship -The line in the graph in a simple linear
regression is flat (not sloped). There is no relationship between
the two variables.
ii)Positive relationship -Exists when the regression line slopes
upward with the lower end of the line at the y -intercept (axis) of
the graph and the upper end of the line extending upward into
the graph, away from the X -intercept (axis). There is a positive
linear relationship between the two variables representing that
as the value of one variable increases, the value of the other
also increases.
iii)Negative relationship -The regression line slopes downwards
with the upper end of the line at the y -intercept (axis) of the
graph and the lower end of the line extending downward into the
graph field, toward the X intercept (axis). There is a negativemunotes.in

Page 59

59linear relationship between the two variables i.e. as the value of
one variable increases, the value of the other decreases.
5.2 THE ESTIMATED LINEAR REGRESSION EQUATION

If the parameters of the population were known, the simple linear regression equation E(Y) = β₀ + β₁X could be used to compute the mean value of Y for a known value of X. In practice, however, the parameter values are generally unknown, so they must be estimated by using data from a sample of the population. The population parameters are estimated by the sample statistics β̂₀ and β̂₁. When these sample statistics are substituted for the population parameters, the estimated regression equation is formed:

$$\hat{Y} = \hat{\beta}_0 + \hat{\beta}_1 X$$

(note: Ŷ is pronounced 'y hat'). The graph of the estimated simple regression equation is called the estimated regression line, where β̂₀ = y-intercept of the regression line, β̂₁ = slope, and Ŷ = estimated value of Y for a given value of X.
5.3 PROPERTIES OF ESTIMATORS

There are different econometric methods with the help of which estimates of the parameters are obtained. We have to choose a good estimator which is close to the population parameter. This closeness is judged on the basis of the following properties.

A) Estimator properties for small samples:

i) Unbiasedness:

The bias of an estimator is defined as the difference between its expected value and the true parameter:

Bias = E(β̂) − β

If the bias is 0, an estimator is said to be unbiased, i.e. E(β̂) = β. A biased and an unbiased estimator of the true β are illustrated in Figure 3.1 (not reproduced here).

Unbiasedness is a desirable property and becomes important only when it is combined with a small variance.

ii) Least variance:

An estimator is best when it has the smallest variance as compared with any other estimate obtained by other econometric methods. Symbolically, β̂ is best if

$$E[\hat{\beta} - E(\hat{\beta})]^2 \le E[\beta^* - E(\beta^*)]^2, \quad \text{i.e.} \quad \text{var}(\hat{\beta}) \le \text{var}(\beta^*)$$

iii) Efficiency:

An estimator is efficient when it is unbiased and has the smallest variance compared with any other unbiased estimator: β̂ is efficient if E(β̂) = β and E[β̂ − E(β̂)]² ≤ E[β* − E(β*)]².

iv) Best Linear Unbiased Estimator (BLUE):

An estimator β̂ is BLU if it is linear, unbiased and has the smallest variance as compared with all other linear unbiased estimators of the true β.

v) Least Mean Square Error estimator (LMSE):

An estimator is a minimum/least MSE estimator if it has the smallest mean square error, defined as the expected value of the squared difference of the estimator around the true population parameter:

$$MSE(\hat{\beta}) = E(\hat{\beta} - \beta)^2$$

vi) Sufficiency:

A sufficient estimator is one that utilises all the information a sample contains about the true parameter; it must use all the observations of the sample. The arithmetic mean (A.M.) is a sufficient estimator because it gives more information than any other measure.
B) Estimator properties for large samples:

These are required when the sample is infinitely large; they are therefore also called asymptotic properties.

i) Asymptotic unbiasedness:

An estimator β̂ is an asymptotically unbiased estimator of the true population parameter β if the asymptotic mean of β̂ is equal to β:

$$\lim_{n \to \infty} E(\hat{\beta}) = \beta$$

The asymptotic bias of an estimator is the difference between its asymptotic mean and the true parameter:

(Asymptotic bias of β̂) = lim E(β̂) − β as n → ∞

If an estimator is unbiased in small samples, it is also asymptotically unbiased.

ii) Consistency:

An estimator β̂ is said to be a consistent estimator of the true population parameter β if it satisfies two conditions:

a) β̂ must be asymptotically unbiased: lim E(β̂) = β as n → ∞
b) the variance of β̂ must approach zero as n tends to infinity: lim var(β̂) = 0 as n → ∞

If the variance is zero, the distribution collapses on the value of the true population parameter. Both the bias and the variance should decrease as n increases.

iii) Asymptotic efficiency:

An estimator β̂ is said to be an asymptotically efficient estimator of the true population parameter β if:

a) β̂ is consistent, and
b) β̂ has a smaller asymptotic variance as compared with any other consistent estimator.

Statistical properties of least squares estimators:

Least squares estimators are BLUE, i.e. Best, Linear and Unbiased estimators, provided the error term uᵢ satisfies some assumptions. The BLU properties of the OLS (Ordinary Least Squares) estimators are also known as the Gauss-Markov theorem. The BLU properties are illustrated in Figure 3.2 (not reproduced here).
The properties of the OLS estimates of the simple linear regression equation Yᵢ = β₀ + β₁Xᵢ + uᵢ are based on the following assumptions:

1) uᵢ is a random real variable.
2) The mean value of u in any particular period is zero, i.e. E(uᵢ) = 0.
3) Assumption of homoscedasticity: the probability distribution of u remains the same over all observations of X, i.e. the variance of uᵢ is constant: E(uᵢ²) = σᵤ² (constant).
4) The random terms of different observations are independent, i.e. E(uᵢuⱼ) = 0 for i ≠ j.
5) The X's are assumed to be fixed.

In the group of linear unbiased estimators, the OLS estimators have the smallest variance, i.e. they are best.

1) Linearity: The OLS estimators β̂₀ and β̂₁ are linear functions of the observed values of Yᵢ, given the assumption that the X's appear with the same values in repeated sampling.

$$\hat{\beta}_1 = \frac{\sum x_i y_i}{\sum x_i^2}$$

where x and y are in deviation form, i.e. xᵢ = Xᵢ − X̄ and yᵢ = Yᵢ − Ȳ. Let

$$k_i = \frac{x_i}{\sum x_i^2}$$

Then β̂₁ = Σkᵢyᵢ. Putting yᵢ = Yᵢ − Ȳ,

$$\hat{\beta}_1 = \sum k_i (Y_i - \bar{Y}) = \sum k_i Y_i - \bar{Y} \sum k_i \qquad (1)$$

But

$$\sum k_i = \frac{\sum x_i}{\sum x_i^2} = 0 \quad (\text{since } \sum x_i = \sum (X_i - \bar{X}) = 0)$$

Putting this value in equation (1), we get

$$\hat{\beta}_1 = \sum k_i Y_i = k_1 Y_1 + k_2 Y_2 + \cdots + k_n Y_n \qquad (2)$$

This implies that β̂₁ is a linear function of the Yᵢ, because the kᵢ depend upon the X's, and the X's are assumed to be fixed.

Similarly, β̂₀ = Ȳ − β̂₁X̄.

Putting the value of β̂₁ from equation (2),

$$\hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{X} = \frac{\sum Y_i}{n} - \bar{X} \sum k_i Y_i = \sum \left(\frac{1}{n} - \bar{X} k_i\right) Y_i \qquad (3)$$

Thus both β̂₀ and β̂₁ are linear functions of the Yᵢ.
2) Unbiasedness: Both β̂₀ and β̂₁ are unbiased estimators, i.e. E(β̂₁) = β₁ and E(β̂₀) = β₀.

Proof: From (2),

$$\hat{\beta}_1 = \sum k_i Y_i = \sum k_i (\beta_0 + \beta_1 X_i + u_i) = \beta_0 \sum k_i + \beta_1 \sum k_i X_i + \sum k_i u_i \qquad (4)$$

Now

$$\sum k_i = \frac{\sum x_i}{\sum x_i^2} = 0 \quad (\text{since } \sum x_i = 0)$$

and, putting Xᵢ = xᵢ + X̄,

$$\sum k_i X_i = \frac{\sum x_i (x_i + \bar{X})}{\sum x_i^2} = \frac{\sum x_i^2}{\sum x_i^2} + \bar{X}\,\frac{\sum x_i}{\sum x_i^2} = 1$$

Substituting Σkᵢ = 0 and ΣkᵢXᵢ = 1 in equation (4),

$$\hat{\beta}_1 = \beta_1 + \sum k_i u_i \qquad (5)$$

Taking expectations on both sides,

$$E(\hat{\beta}_1) = \beta_1 + \sum k_i E(u_i) = \beta_1 \quad (\text{since } E(u_i) = 0)$$

This is known as the unbiasedness of the estimated parameter. Thus β̂₁ is an unbiased estimator of β₁.
From (3), the OLS estimator β̂₀ is

$$\hat{\beta}_0 = \sum \left(\frac{1}{n} - \bar{X} k_i\right) Y_i = \sum \left(\frac{1}{n} - \bar{X} k_i\right)(\beta_0 + \beta_1 X_i + u_i)$$

$$= \beta_0 + \beta_1 \frac{\sum X_i}{n} - \beta_0 \bar{X} \sum k_i - \beta_1 \bar{X} \sum k_i X_i + \sum \left(\frac{1}{n} - \bar{X} k_i\right) u_i \qquad (6)$$

It has been proved that Σkᵢ = 0 and ΣkᵢXᵢ = 1. Substituting these values in equation (6),

$$\hat{\beta}_0 = \beta_0 + \sum \left(\frac{1}{n} - \bar{X} k_i\right) u_i$$

Taking expectations on both sides,

$$E(\hat{\beta}_0) = \beta_0 + \sum \left(\frac{1}{n} - \bar{X} k_i\right) E(u_i) = \beta_0 \quad (\text{since } E(u_i) = 0)$$

This implies that β̂₀ is an unbiased estimator of β₀.
3) Minimum variance property:

$$\text{var}(\hat{\beta}_1) = E[\hat{\beta}_1 - E(\hat{\beta}_1)]^2 = E(\hat{\beta}_1 - \beta_1)^2 \quad (\text{since } E(\hat{\beta}_1) = \beta_1)$$

From (5), β̂₁ − β₁ = Σkᵢuᵢ, so

$$\text{var}(\hat{\beta}_1) = E\Big(\sum k_i u_i\Big)^2 = \sum k_i^2 E(u_i^2) + 2\sum_{i \ne j} k_i k_j E(u_i u_j)$$

Since E(uᵢ²) = σᵤ² and E(uᵢuⱼ) = 0 (by assumption),

$$\text{var}(\hat{\beta}_1) = \sigma_u^2 \sum k_i^2 = \sigma_u^2 \sum \left(\frac{x_i}{\sum x_i^2}\right)^2 = \frac{\sigma_u^2}{\sum x_i^2} \qquad (7)$$

Similarly,

$$\text{var}(\hat{\beta}_0) = E(\hat{\beta}_0 - \beta_0)^2 = E\left[\sum \left(\frac{1}{n} - \bar{X} k_i\right) u_i\right]^2 = \sigma_u^2 \sum \left(\frac{1}{n} - \bar{X} k_i\right)^2 = \sigma_u^2 \left[\frac{1}{n} - \frac{2\bar{X}}{n}\sum k_i + \bar{X}^2 \sum k_i^2\right]$$

Since Σkᵢ = 0 and Σkᵢ² = 1/Σxᵢ²,

$$\text{var}(\hat{\beta}_0) = \sigma_u^2 \left[\frac{1}{n} + \frac{\bar{X}^2}{\sum x_i^2}\right] \qquad (8)$$

Now

$$\frac{1}{n} + \frac{\bar{X}^2}{\sum x_i^2} = \frac{\sum x_i^2 + n\bar{X}^2}{n \sum x_i^2} = \frac{\sum X_i^2}{n \sum x_i^2} \quad (\text{since } \sum x_i^2 = \sum X_i^2 - n\bar{X}^2)$$

Therefore

$$\text{var}(\hat{\beta}_0) = \sigma_u^2\,\frac{\sum X_i^2}{n \sum x_i^2}$$
the smallest variance.
Let*1be another estimator of1.
*iiWYwhere consta ntiiWKbutii iWKC*
10 1
01
*
10 1ii iiiii i
i iiWX UWW X W UEW W X 
 
  
  
 
0iEU  Q Assumption.*11Eif and only if0iWand1iiWX0ii i i iWK C K CBut0iK00iiWCHence0iCand0iW11ii i i i
ii iiWX K C X
KX CX 
ButiiKX=1
1111 0ii
iiCX
CX
Hence0iCand0iiCXVar 2** *
11 1EE   2*
11E
Var 2 **
11 1ii iiEW U W U    Q222 22 22
11 22 ...... 2nn ij ijEWU WU WU W W U U munotes.in

Page 68

68Var*2 2
1 2ii i j i jEWU W W U U 222ii i j i jWEU W W EU U  Since221 0,ij uEU U U(Assumptions)
Var*2 2
10uiw  Putting the values of2iwVar 2*2
1 ui ikC 
Var*2 2 2
1 2ui i i ikC k C Var2
*2 2
1 200u
ui i i
iCk Cx  Q
Var*
11ˆ+ constant20iCQ
Var*
11ˆIt implies that OLS estimator has the minimum variance.
Similarly, let us take a new estimator*0, which is assumed
to be a linear function of theiYand unbiased.ii iwkCLet*
01iiXw Yn where**00iiwkonly if0iwandiiwXz.
It implies that0iCand0iiCXVar2*2
01
uiXwn 
 2
222 22 1
10i
ui
ui iXwXwnn
Xk C wn
 
 
   
    Q
Since22 22iiii iwkC k C 
But22 2110ii ikC w K k Cmunotes.in

Page 69

69222
2
2
222 2211
1ui
iuu iiXCnx
XXCnX
           
  
   
Var*
0Var0ˆ+ a positive constantsVar*0>V a r0ˆThus it is proved that the OLS estimators are BLU.
The standard error test of the estimators β̂₀ and β̂₁:

The least squares estimates are obtained from a sample of observations, so sampling errors inevitably occur in all estimates. Therefore, to measure the size of the error it becomes necessary to apply a test of significance. Let us discuss the standard error test, which helps us decide whether the estimates are statistically reliable or not. To test the null hypothesis H₀: β₁ = 0 against the alternative hypothesis H₁: β₁ ≠ 0, we use

$$S(\hat{\beta}_1) = \sqrt{\text{var}(\hat{\beta}_1)} = \sqrt{\frac{\hat{\sigma}_u^2}{\sum x_i^2}}, \qquad S(\hat{\beta}_0) = \sqrt{\text{var}(\hat{\beta}_0)} = \sqrt{\frac{\hat{\sigma}_u^2 \sum X_i^2}{n \sum x_i^2}}$$

When the standard error is less than half of the numerical value of the parameter estimate, S(β̂₁) < β̂₁/2, we conclude that the estimate is statistically significant. Therefore we reject the null hypothesis and accept the alternative hypothesis, i.e. the true population parameter β₁ is different from zero.

If the standard error is greater than half of the numerical value of the parameter estimate, S(β̂₁) > β̂₁/2, we conclude that the null hypothesis is accepted and the estimate is not statistically significant.

The acceptance of the null hypothesis implies that the explanatory variable to which the estimate relates does not affect the dependent variable, i.e. there is no relationship between the Y and X variables.
5.4 REFERENCE

S. Shyamala and Navdeep Kaur, 'Introductory Econometrics'.
Neeraj R. Hatekar, 'Principles of Econometrics: An Introduction (Using R)', SAGE Publications.
munotes.in

Page 71

6
TESTS IN REGRESSION AND INTERPRETING
REGRESSION COEFFICIENTS
Unit Structure :
6.0 Objectives
6.1 Introduction
6.2 Z-Test
6.3 t-Test
6.4 Goodness of fit (R²)
6.5 Adjusted R squared
6.6 The F -test in regression
6.7 Interpreting Regression Coefficients
6.8 Questions
6.9 References
6.0 OBJECTIVES
To understand the meaning of adjusted R squared.
To use the F-test in regression.
To interpret the regression coefficients.
6.1 INTRODUCTION

Regression coefficients are a statistical measure of the average functional relationship between two or more variables. In regression analysis, one variable is dependent and the other variables are independent. In short, a regression coefficient measures the degree of dependence of one variable on another.

The regression coefficient was first used to estimate the relationship between the heights of fathers and their sons. The regression coefficient is denoted by b.
6.2 Z-TEST

The Z test of the least squares estimates is based on the standard normal distribution and is applicable when the population variance is known, or when the population variance is unknown but the sample is sufficiently large, i.e. n ≥ 30.

Assuming the null hypothesis H₀: β = 0 and the alternative hypothesis H₁: β ≠ 0, the least squares estimates β̂₀ and β̂₁ have the following normal distributions:

$$\hat{\beta}_0 \sim N\left(\beta_0,\; \sigma_u^2\,\frac{\sum X_i^2}{n \sum x_i^2}\right), \qquad \hat{\beta}_1 \sim N\left(\beta_1,\; \frac{\sigma_u^2}{\sum x_i^2}\right)$$

After transforming into the standard normal N(0, 1):

$$Z = \frac{X_i - \mu}{\sigma} \sim N(0, 1)$$

where Xᵢ = value of the variable which is to be normalised, μ = mean of the distribution, and σ = standard deviation. Thus

$$Z^* = \frac{\hat{\beta}_0 - \beta_0}{\sqrt{\sigma_u^2 \sum X_i^2 / (n \sum x_i^2)}} \sim N(0,1), \qquad Z^* = \frac{\hat{\beta}_1 - \beta_1}{\sqrt{\sigma_u^2 / \sum x_i^2}} \sim N(0,1)$$

Given the calculated value of Z*, we select the level of significance to decide the acceptance or rejection of the null hypothesis. Generally speaking, in econometrics we choose the 5% or 1% level of significance, i.e. we tolerate being wrong 5 times out of 100 while making decisions.

We perform a two-tailed test, i.e. the critical region covers both tails of the standard normal distribution. For the 5% level of significance, each tail includes 0.025 of the probability. The table values of Z corresponding to probability 0.025 at each end of the curve are Z₁ = −1.96 and Z₂ = +1.96.

To conclude, we compare the observed value Z* with the table value of Z. If it falls in the critical region, i.e. if Z* < −1.96 or Z* > 1.96, we reject the null hypothesis. If it lies outside the critical region, i.e. −1.96 ≤ Z* ≤ 1.96, we accept the null hypothesis.

In econometrics it is customary to test the hypothesis that the true population parameter is zero: H₀: β₁ = 0 is tested against the alternative hypothesis H₁: β₁ ≠ 0.

To test the above null hypothesis, put β₁ = 0 in the Z-transformation formula:

$$Z^* = \frac{\hat{\beta}_1 - \beta_1}{\sigma(\hat{\beta}_1)} = \frac{\hat{\beta}_1 - 0}{\sigma(\hat{\beta}_1)} = \frac{\hat{\beta}_1}{\sigma(\hat{\beta}_1)}$$

If |Z*| > 1.96, we accept H₁ and reject H₀.

Given the 5% level of significance, the critical value of Z is 1.96, which is approximately equal to 2.0. In the standard error test we reject the null hypothesis if σ(β̂₁) < β̂₁/2. In the case of the Z test, if |Z*| > 2 we reject the null hypothesis. The two statements are identical, because

$$Z^* = \frac{\hat{\beta}_1}{\sigma(\hat{\beta}_1)} > 2 \quad \Leftrightarrow \quad \sigma(\hat{\beta}_1) < \frac{\hat{\beta}_1}{2}$$

Thus the standard error test and the Z test give the same result.
6.3 t-TEST

The t-test uses the variance estimate Sₓ² instead of the true variance σₓ². The formula is

$$t = \frac{\bar{X} - \mu}{S_X/\sqrt{n}}$$

with (n − 1) degrees of freedom, where μ = population mean, Sₓ² = sample estimate of the population variance, $S_X^2 = \sum (X_i - \bar{X})^2/(n-1)$, and n = sample size.

The sampling distribution is X̄ ~ N(μ, Sₓ²/n), and the transformed statistic (X̄ − μ)/(Sₓ/√n) has a t distribution with (n − 1) degrees of freedom.
We have least square estimates as :munotes.in

Page 74

742
2
002ˆ ˆ,i
uiXNnX    : and
122
ˆ 1121ˆ ˆˆ,uiNX    :
From this the t statistic for0ˆand1ˆare obtained from a
sample reduces to**000ˆˆˆt and**111ˆˆˆt withnkdegrees
of freedom.0ˆand1ˆleast squares estimates of0and1respectively.*0and*
1hypothesised value of0and.
02
ˆˆestimated variance of0(from the regression)
12
ˆˆestimated variance of1nsample sizeKtotal number of estimated parameters
(in oure case of K = 2)
Assuming The n ull hypothesis is00:0HThe alternative hypothesis10:0H
0*0ˆˆtSThen the calculated*tvalue is compared to the table values
of t with n -Kd e g r e e so ff r e e d o m .
If*0.025tt, we reject the null hypothesis, i.e. we accept that
the estimate0ˆis statistically significant.
When*0.025tt, we accept the null hypothesis, that is, the
estimate0ˆis not statistically significant at the 5% level of
significance.
Similarly for the estimate1ˆ.
Null hypothesis01:0Hand Alternative hypothesis
11:0Hmunotes.in

$$t^* = \frac{\hat{\beta}_1}{S(\hat{\beta}_1)}$$

If |t*| > t(0.025), we reject the null hypothesis and conclude that the estimate β̂₁ is statistically significant at the 5% level of significance.

If |t*| < t(0.025), we accept the null hypothesis; that is, we conclude that the estimate β̂₁ is not statistically significant at the 5% level of significance.

Confidence intervals for β̂₀ and β̂₁:

The t statistic for β̂₀ is

$$t^* = \frac{\hat{\beta}_0 - \beta_0}{S(\hat{\beta}_0)}$$

with (n − k) degrees of freedom.

First we choose the 95 percent confidence level and find the value t(0.025) from the t table with (n − k) degrees of freedom. This implies that the probability of t lying between −t(0.025) and +t(0.025) is 0.95. Thus the 95 percent confidence interval for β₀ is

$$\hat{\beta}_0 - t_{0.025}\, S(\hat{\beta}_0) < \beta_0 < \hat{\beta}_0 + t_{0.025}\, S(\hat{\beta}_0)$$

i.e. β₀ = β̂₀ ± t(0.025) S(β̂₀), with (n − k) degrees of freedom.

Similarly, for the estimate β̂₁,

$$t^* = \frac{\hat{\beta}_1 - \beta_1}{S(\hat{\beta}_1)}$$

with (n − k) degrees of freedom, and the 95 percent confidence interval is

$$\hat{\beta}_1 - t_{0.025}\, S(\hat{\beta}_1) < \beta_1 < \hat{\beta}_1 + t_{0.025}\, S(\hat{\beta}_1)$$

i.e. β₁ = β̂₁ ± t(0.025) S(β̂₁), with (n − k) degrees of freedom.
6.4 GOODNESS OF FIT (R²)

A measure of goodness of fit is the square of the correlation coefficient, R², which shows the percentage of the total variation of the dependent variable that can be explained by the independent variable (X).

Since TSS = ESS + RSS, where

TSS = total sum of squares = Σyᵢ²
RSS = residual sum of squares = Σeᵢ²
ESS = explained sum of squares = β̂₁²Σxᵢ²

and yᵢ = Yᵢ − Ȳ, xᵢ = Xᵢ − X̄, the decomposition of the total variation in Y leads to a measure of goodness of fit, also called the coefficient of determination:

$$R^2 = \frac{ESS}{TSS} = \frac{\hat{\beta}_1^2 \sum x_i^2}{\sum y_i^2}$$

As ESS = TSS − RSS,

$$R^2 = \frac{TSS - RSS}{TSS} = 1 - \frac{\sum e_i^2}{\sum y_i^2}$$

Properties of R²:

i) It is a non-negative quantity, i.e. it is always positive (R² ≥ 0). It is calculated under the assumption that there is an intercept term in the regression equation of Y on X.

ii) Its limits are 0 ≤ R² ≤ 1. When R² = 0, there is no relationship between the dependent and explanatory variables. When R² = 1, there is a perfect fit.
iii) R² = r²:

By definition, r can be written as

$$r = \frac{\sum x_i y_i}{\sqrt{\sum x_i^2 \sum y_i^2}}$$

where xᵢ = Xᵢ − X̄ and yᵢ = Yᵢ − Ȳ. Since

$$R^2 = \frac{\hat{\beta}_1^2 \sum x_i^2}{\sum y_i^2} \quad \text{and} \quad \hat{\beta}_1 = \frac{\sum x_i y_i}{\sum x_i^2},$$

$$R^2 = \frac{(\sum x_i y_i)^2}{\sum x_i^2 \sum y_i^2} = r^2$$

so the correlation coefficient is r = ±√R². While R² varies between 0 and 1 (0 ≤ R² ≤ 1), r varies between −1 and +1 (−1 ≤ r ≤ 1), indicating negative and positive linear correlation respectively at the two extreme values.
The R squared statistic suffers from a major drawback. No
matter the number of variables we add to our regression model the
value of R square never decreases.
If either remains same or increases with the new
independent variable even though the variable is redundant. In
reality, its resul t can not be accepted since the new independent
variable might not be necessary to determine the target variable.
So the adjusted R square deals with this problem.
Adjusted R squared measures the proportion of variation
explained by only those independ ent variables which are really
helpful in determining the dependent variable. It is represented with
the help of the following formula
Adjusted
2
211
11Rn
Rnk         Wherensample sizeknumber of independent variableRR squared values determined by the model
To conclude the difference between R square and adjusted
R square we may say that
i)When we add a new independent variable to a regression
model, the R-squared increase, even though the new independent
variable is not useful indeteming the dependent variable. Whereasmunotes.in

Page 78

78adjusted R squared increases only when new independent
variables is useful and affect the dependent variable.
ii)Adjusted R -squared can be negative when R -squared is close
to zero.
iii)Adjusted R -squared value always be less than or equal to R -
squared value.
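A minimal sketch of both statistics, assuming a fitted model's predictions are already available (the helper names here are illustrative, not from any particular library):

```python
# R-squared and adjusted R-squared from actual and fitted values.
import numpy as np

def r_squared(y, y_hat):
    e = y - y_hat
    return 1 - (e ** 2).sum() / ((y - y.mean()) ** 2).sum()

def adjusted_r_squared(r2, n, k):
    # Penalizes regressors that add little explanatory power.
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)
```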
6.6 THE F-TEST IN REGRESSION

The F-test is a type of statistical test which is very flexible; it can be used in a wide variety of settings. In this unit we discuss the F-test of overall significance. It indicates whether our regression model provides a better fit to the data than a model that contains no independent variables. Here we explain how the F-test of overall significance fits in with other regression statistics, such as R-squared. R-squared provides an estimate of the strength of the relationship between the regression model and the response variable, but it does not provide any formal hypothesis test for this relationship. The overall significance F-test determines whether this relationship is statistically significant or not. If the p-value for the overall F-test is less than the level of significance, we conclude that the R-squared value is significantly different from zero.

The overall F-test compares our model with a model with no independent variables; such a model is known as an intercept-only model. It has the following two hypotheses:

a) The null hypothesis: the fit of the intercept-only model and our model are equal.

b) The alternative hypothesis: the fit of the intercept-only model is significantly worse than that of our model.

We can find the overall F-test in the ANOVA table.
Table: ANOVA

Source       DF   Adj SS    Adj MS     F-Value   P-Value
Regression    3   12833.9    4278.0     57.87     0.000
East          1     226.3     226.3      3.06     0.092
South         1    2255.1    2255.1     30.51     0.000
North         1   12330.6   12330.6    166.80     0.000
Error        25    1848.1      73.9
Total        28   14681.9

In the above table, we compare the p-value for the F-test with our significance level. If the p-value is less than the significance level, our sample data provide sufficient evidence to conclude that our regression model fits the data better than the model with no independent variables.
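The overall F statistic can also be recovered from R-squared. A minimal sketch using the ANOVA table above (k = 3 regressors; Total DF = 28 implies n = 29):

```python
# Overall F statistic computed from R-squared.
k, n = 3, 29
R2 = 12833.9 / 14681.9          # ESS / TSS from the ANOVA table
F = (R2 / k) / ((1 - R2) / (n - k - 1))
print(round(F, 2))              # ~57.87, matching the table's F-value
```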
6.7 INTERPRETING REGRESSION COEFFICIENTS

As defined in the introduction, a regression coefficient measures the average functional relationship between variables and is denoted by b. Basically, there are two types of regression coefficients: the coefficient of the regression of Y on X (byx) and the coefficient of the regression of X on Y (bxy).
Properties of regression coefficients:

Some important properties of the regression coefficients are as follows:

1) Both regression coefficients have the same sign. If byx is positive, bxy will also be positive, and if byx is negative, bxy will also be negative: if byx > 0 then bxy > 0; if byx < 0 then bxy < 0.

2) If one regression coefficient is numerically greater than unity, the other regression coefficient must be numerically less than unity (since their product cannot exceed 1): if byx > 1 then bxy < 1.

3) The geometric mean (GM) of the two regression coefficients is equal to the correlation coefficient:

$$r = \pm\sqrt{b_{yx} \cdot b_{xy}}$$

where r = correlation coefficient and byx = coefficient of the regression of Y on X.

bxy = coefficient of the regression of X on Y.

4) The correlation coefficient and the regression coefficients have the same sign: if r > 0, then byx > 0 and bxy > 0; if r < 0, then byx < 0 and bxy < 0.

5) The arithmetic mean of the two regression coefficients is equal to or greater than the correlation coefficient: (byx + bxy)/2 ≥ r.

6) The two regression lines intersect at the point of the arithmetic means of the two variables, (X̄, Ȳ).
Computation of regression coefficients:

The regression coefficients can be calculated from the following formulas:

$$b_{yx} = \frac{n\sum XY - \sum X \sum Y}{n\sum X^2 - (\sum X)^2}, \qquad b_{xy} = \frac{n\sum XY - \sum X \sum Y}{n\sum Y^2 - (\sum Y)^2}$$

Steps:

For the calculation of the regression coefficients, follow these steps:

1) Take the sums of all observations of the X and Y variables (ΣX, ΣY).
2) Take the sums of squares of the X and Y variables (ΣX², ΣY²).
3) Take the sum of the products of all observations of X and Y (ΣXY).
4) Use the above formulas to calculate the regression coefficients.
Example:

X:  2  4  1  5  6  7  8  1  0
Y:  3  1  5  7  8  9  0  5  4

Calculate byx and bxy from the above information.
Solution:

X    Y    XY    X²    Y²
2    3     6     4     9
4    1     4    16     1
1    5     5     1    25
5    7    35    25    49
6    8    48    36    64
7    9    63    49    81
8    0     0    64     0
1    5     5     1    25
0    4     0     0    16

First take the sums of all observations of the X and Y variables:

ΣX = 2 + 4 + 1 + 5 + 6 + 7 + 8 + 1 + 0 = 34
ΣY = 3 + 1 + 5 + 7 + 8 + 9 + 0 + 5 + 4 = 42

Then take the sums of squares of the X and Y variables:

ΣX² = 4 + 16 + 1 + 25 + 36 + 49 + 64 + 1 + 0 = 196
ΣY² = 9 + 1 + 25 + 49 + 64 + 81 + 0 + 25 + 16 = 270

Now take the sum of the products of all observations of X and Y:

ΣXY = 6 + 4 + 5 + 35 + 48 + 63 + 0 + 5 + 0 = 166

Now put the above values (with n = 9) in the formulas and calculate the regression coefficients.

Regression coefficient of the regression of Y on X:

$$b_{yx} = \frac{n\sum XY - \sum X \sum Y}{n\sum X^2 - (\sum X)^2} = \frac{9(166) - 34(42)}{9(196) - (34)^2} = \frac{1494 - 1428}{1764 - 1156} = \frac{66}{608} = 0.109$$

Regression coefficient of the regression of X on Y:

$$b_{xy} = \frac{n\sum XY - \sum X \sum Y}{n\sum Y^2 - (\sum Y)^2} = \frac{1494 - 1428}{9(270) - (42)^2} = \frac{66}{2430 - 1764} = \frac{66}{666} = 0.099$$

So byx = 0.109 and bxy = 0.099. As a check, r = √(byx · bxy) ≈ 0.104, which lies between −1 and +1, and both coefficients have the same sign, as the properties require.
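A minimal sketch of the same computation, verifying the relation between the coefficients and r:

```python
# Regression coefficients byx, bxy and the correlation coefficient.
import numpy as np

X = np.array([2, 4, 1, 5, 6, 7, 8, 1, 0])
Y = np.array([3, 1, 5, 7, 8, 9, 0, 5, 4])
n = len(X)

num = n * (X * Y).sum() - X.sum() * Y.sum()       # 66
byx = num / (n * (X ** 2).sum() - X.sum() ** 2)   # ~0.109
bxy = num / (n * (Y ** 2).sum() - Y.sum() ** 2)   # ~0.099
r = np.sign(num) * np.sqrt(byx * bxy)             # ~0.104
print(round(byx, 3), round(bxy, 3), round(r, 3))
```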

6.8 QUESTIONS

Q.1
X:  2  4  6  5  3  9  10
Y:  4  2  5  7  8  0   4
Calculate the regression coefficients (byx and bxy).

Q.2
X:  4  5  6  8  9  10  7  6
Y:  4  1  5  4  10  12  7  8
Calculate the regression coefficients (byx and bxy).
6.9 REFERENCE

S. Shyamala and Navdeep Kaur, 'Introductory Econometrics'.
Neeraj R. Hatekar, 'Principles of Econometrics: An Introduction (Using R)', SAGE Publications.

MODULE 4
7
PROBLEMS IN SIMPLE LINEAR
REGRESSION MODEL :
HETEROSCEDASTICITY
Unit Structure :
7.0 Objectives
7.1 Introduction
7.2 Assumptions of OLS Method
7.3 Heteroscedasticity
7.4 Sources of Heteroscedasticity
7.5 Detection of Heteroscedasticity
7.6 Consequences of Heteroscedasticity
7.7 Questions
7.8 References
7.0 OBJECTIVES :
1. To understand the causes of heteroscedasticity.
2. To understand the detection of heteroscedasticity.
3. To understand the consequences of heteroscedasticity.
7.1 INTRODUCTION :
In the previous unit, you learnt about simple linear regression: its meaning, the estimation of the simple linear regression model, etc. In this unit you will learn about the problems in the simple linear regression model.

The simple regression model includes only two variables, so it is also known as the 'two-variable regression model'. When we consider a linear relationship between the two variables in the simple regression model, it is called the simple linear regression model. There are two methods for the estimation of the simple linear regression model, namely the ordinary least squares (OLS) method and the maximum likelihood principle. When the OLS method cannot be used for the estimation of the simple linear regression model, the maximum likelihood principle is used. But because of the following factors, the OLS method is generally appropriate for the estimation of the simple linear regression model.
Merits of the Ordinary Least Squares (OLS) Method:

- The OLS method is easy to understand.
- It is very widely used.
- It gives satisfactory results.
- Among all the estimation methods, it is the most important one.
The simple linear regression model is written as follows:

Yᵢ = β₁ + β₂Xᵢ + uᵢ

where Yᵢ = dependent variable, β₁ = intercept, β₂ = slope, Xᵢ = independent variable, and uᵢ = random disturbance term.

If we want to use the OLS method for the estimation of the above simple linear regression model, the study of the assumptions of the OLS method becomes necessary.
7.2 ASSUMPTIONS OF THE ORDINARY LEAST SQUARES (OLS) METHOD

The least squares principle was developed by the German mathematician Gauss.

There are ten assumptions of the OLS method. In short, we discuss them as below:

1. The regression model is linear in the parameters:

Yᵢ = β₁ + β₂Xᵢ + uᵢ

This is a simple linear regression model, and it is linear in both the variables (X, Y) and the parameters (β₁, β₂). In short, linearity in the parameters is crucial for the use or application of the least squares principle.
Values taken by the regression X are assumed or
considered t o be fixed in repeated sampling :
11iiYX uiWhere Xi= Fixed / Constant
Yi=V a r i e s
Because of this assumption the regression analysis becomes the
conditional regression analysis.
3.Zero mean v alue of disturbance u i:
It means , expected value of the disturbance uiis zero.
Given the values of X, the mean or expected value of the
disturbance term (ui)i sz e r o .
Symbolically,
E(ui)=O Or E= ( ui/Xi)=O
4.Homoscedasticity or equal va riance of u i:Homo means
equal and scedasticity means spread. So Homoscedasticity means
equal spread. Given the values of X, the variance of uiis the same
for all observations
Symbolically,
Var (ui/Xi)=62
munotes.in

Page 87

87In the above figure, AB is the sp read of uiforX1, CD is the spread
ofuiforX2and EF is the spread of uiforX3,
So,
AB = CD = EF
It means, uiis Heteroscedastic –In this case, var ( ui/X1)≠ 62
5.No autocorrelation between the disturbance terms :
Given any two X values, XiandXj,(i≠ j)thecorrelation between
any two uianduj(i≠ j)i sz e r o .
Symbolically,
Cov ( uiuj/XiXj)=E[(ui–E(ui)/Xi(uj–E(uj)/Xj)]
=E[(ui/Xi(ujXj)]
Here ,E(ui)=O
E(uj)=O
=O
Here ,E(ui/Xi)=O
E(uj/Xj)=O
6.Zero covariance between u ia n d Xi:
Cov ( ui,Xi)=E[(ui–E(ui)(Xi–E(Xi)]
Here, E (ui)=O
=E[(ui(Xi–E(Xi)]
=E[(uiXi–E(Xi)ui]
=E(uiXi)-E(Xi)E(ui)
Here, E (ui)=O
=E(uiXi)
Here, Xi=non stochastic
=XiE(ui)
Here, E (ui))=O
Cov ( ui,Xi)=Omunotes.in

Page 88

887.The number of observation ‘n’ is greater than the number of
parameters (to be estimated).
8.Variability in X values :
The X variable is a given sample must not all be the same.
9.The regression model is correctly specified.
10.There is no perfect multicolinearity :It means that there is no
perfect linear relationship among the explanatory variables.
These are the ten important assumption of OLS method.
While using the OLS method for the estimation of simple
linear regression model, if assumption no. 4, 5 and 10 do not fulfil,
problems create in the simple linear regression model which are
namely heteroscedasticity, autocorrelation and multicolinearity.
Check your prog ress:
1.What are the ten principles of ordinary least square (OLS)
method?
7.3 HETEROSCEDASTICITY
The term Heteroscedasticity is the opposite term of
homoscedasticity; heteroscedasticity means unequal variance of
disturbance term ( ui).
E(ui2)=62Homoscedasticity
E(ui2)≠62Heteroscedasticity
Given the values of X, the variance of ui(Expected or mean
value of ui)t h a tE( ui), is the same for all observations. This as
assumption of OLS, principle which is useful for the estimation of
simple linear regression model.
E(ui2)=Var (ui)=62
If above assumption does not fulfil, then the problem of
heteroscedasticity arises in the estimation of simple linear
regression.munotes.in

Page 89

89Ex. If Income of individual increases, has saving increases but
the variance of saving will be t he same, it is known as
homoscedasticity
YSVar (S) = same Homoscedasticity
If the variance of saving will be variable, it is known as
heteroscedasticity.
YSVar (S) ≠ same Heteroscedasticity
7.4 SOURCES OF HETEROSCEDASTICITY
The problem of heteroscedasticity in the simple linear
regression model is arisen because of the following reasons.
1.The o ld technique of data collection :
While estimating the simple linear regression model by OLS
method, the old technique has been used for collecting the data or
information then the problem of heteroscedasticity creates in the
simple linear regression model.
2.Presence of Outliners :
The problem of heterosc edasticity creates because of the
presence of outliners. Because of it the variance of disturbance
term does not fix on same.
3.Incorrect Specification of the model :
If the model (Simple linear regression model specified
incorrect, the problem of hete roscedasticity arises in it.
7.5DETECTION OF HETEROSCEDASTICITY
There are mainly five methods on tests of the detection of
the problem of heteroscedasticity in the simple linear regression
model. With the help of these detecting methods of
heterosceda sticity, you will be able to find the problem of
heteroscedasticity in the simple linear regression model.
Graphical method
Park Test
Glejser Test
Spearman’s Rank Correlation Test
Goldfeld -Quandt Test.munotes.in

Page 90

901.GRAPHICAL METHOD :
For the detection of heteroscedasticity problem in the simple
linear regression model, in this method squared residuals (2ui)are
plotted against the estimated value of the independent variance
(iY).
In the graphical method, t here are mainly following four
patterns.
i)No Systematic Pattern :2uiOiY/iXIn the above graph, there is no systematic relationship
betweeniY/iXand2uiso, there is no heteroscedasticity.
ii)Linear Pattern :2uiOiY/iXAbove graph indicates the linear relationship betweeniY/iXand2uiwhich showed the presence of the problem of
heteroscedasticity.
munotes.in

Page 91

91iii)Quadratic Pattern :2uiOiY/iXAbove graph also shows, the presence of heteroscedasticity
in simple linear regression model.
iv)Quadratic Pattern :2uiOiY/iXAbove graph indicates that there is the present of problem of
heteroscedasticity. In short, when there is the systematic
relationship betweeniY/iXand2uithen there is the presence of
heteroscedasticity.
2.PARK TEST :
R. E. Park developed the test for the detection of
heteroscedasticity in the regression model which is known as Park
Test. R. E. Park developed this test in Econometrica in article
entitled ‘Estimation with He teroscedastic Error Terms’ in 1976.
Park said that, 6i2is the he teroscedastic variance of ui which
varies and the relationship between he teroscedastic variance of
residual s( 6 i2)a n de x p l a n a t o r yv a r i a b l e( X i ) .
6i2=62Xievi-(1)
In 6i2=I n 62+lnX i+Vi -(2)
munotes.in

Page 92

92Where,
6i2=H e terosceda stic Variance of ui
62= Homoscedastic Variance of ui
X= e x p l a n a t o r y v a r i a b l e
Vi= Stochastic te rm
If,6 i2is unknownm Park suggeted2ui(squared regression
residuals) instead of 6i2.
In2ui=In 62+lnX i+Vi -(3)
where, In 62-
In2ui=+lnX i+Vi -(4)
Criticisms on ParkTest:
Goldfeld and Quandt criticized that Park used the, Vi
Stochastic term in the process of detection of the problem of
heteroscedasticity which is or can be already he teroscedastic.
But Park has shown, Vi is a stochastic te rm which is
homoscedastic .
3.GLEJSER TEST :
H. Glejser developed the test for the detecting the
heteroscedasticity in 1969 in the article entitled ‘A New Test for
Heteroscedasticity ’ in Journal of the American Statistical
Association.
Glejser suggested that get the residuals value while
regressing on the data and the regress on residual value, while
regressing, Glejser used the following six types of functional form.
ui=1+2Xi+Vi -(i)
ui=1+2Xi+Vi -(ii)
ui=1+21Xi+Vi -(iii)
ui=1+21Xi+Vi -(iv)
ui=12Xi+Vi -(v)
ui=212Xi+Vi -(vi)munotes.in

Page 93

93Above first 4 equations are linear in parameters and last 2
equations are non -linear in parameters.
Glejser suggested above 6 functi onal forms for testing the
relationship between the stochastic term (Vi) and explanatory
variable (X).
According Glejser, first four equations (1, 2, 3, 4) give the satisfied
results because these are linear in parameter and last two
equations (5, 6) gi ve non -satisfied result, because these are non -
linear in parameters.
Criticisms on Glejser Test :
Goldfeld and Quandt criticized on Glejser test as below –
1. Glejser suggested six functional forms, in the last two
functional forms get the non –linear estimates while taking
variance of ordinary least square (OLS) estimates.
2. Vi is a stochastic term which can be heteroscedastic and
multicolinears and the expected value of Vi is non –zero.
E( V i ) ≠ O
4. SPEARMAN'S RANK CORRELATION TEST:

This test is based on the rank correlation coefficient, which is why it is known as Spearman's rank correlation test. It uses the rank correlation between the absolute values of the residuals |ûᵢ| and Xᵢ. Spearman's rank correlation is denoted by rₛ. Symbolically,

$$r_s = 1 - 6\left[\frac{\sum d_i^2}{n(n^2 - 1)}\right]$$

where rₛ = Spearman's rank correlation coefficient, n = number of pairs of observations ranked, and dᵢ = the difference in the ranks assigned to the two different characteristics of the i-th individual.

For detecting heteroscedasticity in the simple linear regression model Yᵢ = β₁ + β₂Xᵢ + uᵢ, the following steps have been suggested.
Steps:
i) Fit the regression to the data and obtain the residuals ûᵢ.
ii) Ignoring the sign of ûᵢ, rank both |ûᵢ| and Xᵢ in ascending or descending order, compute Spearman's rank correlation coefficient r_s, and then compute
t = r_s √(n − 2) / √(1 − r_s²),    df = n − 2
If the computed t value is greater than the critical t value, heteroscedasticity is present in the simple linear regression model.
If the computed t value is less than the critical t value, heteroscedasticity is absent from the simple linear regression model.
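A compact Python sketch of these steps (numpy assumed; the double-argsort ranking trick and the function name are illustrative, and ties in the data are ignored for simplicity):

import numpy as np

def spearman_het_test(x, residuals):
    """Rank |u_hat| and X, compute r_s = 1 - 6*sum(d^2)/(n(n^2-1)),
    then t = r_s*sqrt(n-2)/sqrt(1-r_s^2) with n-2 degrees of freedom."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    rank_u = np.abs(np.asarray(residuals, dtype=float)).argsort().argsort() + 1
    rank_x = x.argsort().argsort() + 1
    d = rank_u - rank_x
    rs = 1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1))
    t = rs * np.sqrt(n - 2) / np.sqrt(1 - rs ** 2)
    return rs, t   # compare |t| with the critical t value at n-2 df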
5. GOLDFELD-QUANDT TEST:
Goldfeld and Quandt developed a test to detect the problem of heteroscedasticity which is known as the Goldfeld-Quandt test.
This test depends on the assumption that there is a positive relationship between the heteroscedastic variance σᵢ² and the explanatory variable Xᵢ.
Steps:
There are mainly the following four steps for detecting the problem of heteroscedasticity.
1) Order or rank the observations according to the values of Xᵢ, beginning with the lowest X value.
Example:
Unordered data:
Yᵢ:  20  30  40  50  60
Xᵢ:  18  15  17  25  30
Ranked by Xᵢ:
Yᵢ:  30  40  20  50  60
Xᵢ:  15  17  18  25  30
2) Omit the c central observations and divide the remaining (n − c) observations into two groups:
Yᵢ   Xᵢ    Group
30   15    } A
40   17    } A
20   18    ignored
50   25    } B
60   30    } B
3) Fit separate OLS regressions to the first group of observations (A) and to the last group (B), and obtain the respective residual sums of squares RSS₁ and RSS₂.
4) Compute the ratio
F = (RSS₂ / df) / (RSS₁ / df)
where df is the degrees of freedom of each group regression (df = (n − c − 2k)/2, k being the number of parameters estimated in each regression). If the calculated value of this F ratio at the given level of significance (α) is greater than the critical F value, the homoscedasticity hypothesis is rejected and the heteroscedasticity hypothesis is accepted.
Calculated F value > Critical F value → Presence of heteroscedasticity
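A minimal sketch of the procedure in Python (numpy assumed; the helper names and the equal-size split of the two groups are illustrative):

import numpy as np

def rss(y, x):
    """Residual sum of squares from an OLS fit of y on a constant and x."""
    X = np.column_stack([np.ones(len(x)), x])
    coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ coef
    return e @ e

def goldfeld_quandt(y, x, c):
    """Sort by X, drop the c central observations, fit OLS separately
    to each remaining group and return F = (RSS2/df)/(RSS1/df)."""
    order = np.argsort(x)
    y, x = np.asarray(y, float)[order], np.asarray(x, float)[order]
    m = (len(x) - c) // 2              # observations per group
    rss1, rss2 = rss(y[:m], x[:m]), rss(y[-m:], x[-m:])
    df = m - 2                         # two parameters estimated per group
    return (rss2 / df) / (rss1 / df)   # compare with critical F(df, df)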
6. Other Tests for detecting the problem of Heteroscedasticity:
i) Breusch-Pagan-Godfrey Test
ii) White's General Heteroscedasticity Test
iii) Koenker-Bassett (KB) Test
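For reference, the first two of these tests are available ready-made in Python's statsmodels library; a small sketch (the data arrays here are made up purely for illustration):

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, het_white

# illustrative data only
x = np.array([15.0, 17.0, 18.0, 25.0, 30.0, 12.0, 22.0, 28.0])
y = np.array([30.0, 40.0, 20.0, 50.0, 60.0, 25.0, 45.0, 55.0])

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()

bp_lm, bp_pvalue, _, _ = het_breuschpagan(fit.resid, X)
w_lm, w_pvalue, _, _ = het_white(fit.resid, X)
print(bp_pvalue, w_pvalue)   # small p-values suggest heteroscedasticity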
7.6 CONSEQUENCES OF HETEROSCEDASTICITY
The consequences of using OLS to estimate the simple linear regression model in the presence of heteroscedasticity are as follows:
1. In the presence of heteroscedasticity, the values of the OLS estimators do not change, but their variances are affected.
2. The linearity and unbiasedness of the OLS estimators do not change in the presence of heteroscedasticity, but the estimators no longer have minimum variance; that is why they are not efficient.
3. The confidence intervals become wider.
4. The usual tests of statistical significance of the parameter estimates cannot be relied upon in the presence of heteroscedasticity.
7.7 QUESTIONS
1. Explain any two tests for the detection of heteroscedasticity.
2. Explain the assumptions of the OLS method of estimation of the simple linear regression model.
3. What is heteroscedasticity? Explain the causes and consequences of heteroscedasticity.
7.8 REFERENCES
Gujarati Damodar N., Porter Dawn C. & Pal Manoranjan, 'Basic Econometrics', Sixth Edition, McGraw Hill.
Hatekar Neeraj R., 'Principles of Econometrics: An Introduction (Using R)', SAGE Publications, 2010.
Kennedy P., 'A Guide to Econometrics', Sixth Edition, Wiley Blackwell, 2008.
8
PROBLEMS IN SIMPLE LINEAR
REGRESSION MODEL:
AUTOCORRELATION
Unit Structure :
8.0 Objectives
8.1 Introduction
8.2 Autocorrelation
8.3 Sources of Autocorrelation
8.4 Detection of Autocorrelation
8.5 Consequences of Autocorrelation
8.6 Questions
8.7 References
8.0 OBJECTIVES :
1. To understand the causes of Autocorrelation.
2. To understand the detection of Autocorrelation.
3. To understand the consequences of Autocorrelation.
8.1 INTRODUCTION :
While using the OLS method for the estimation of the simple linear regression model, if assumption 5, that there is no autocorrelation between the disturbance terms, is not fulfilled, the problem of autocorrelation arises in the simple linear regression model.
8.2 AUTOCORRELATION
Autocorrelation may be defined as 'correlation between the disturbance terms (uᵢ, uⱼ)'.
The OLS method of estimation of the linear regression model assumes that such autocorrelation does not exist in the disturbances (uᵢ, uⱼ).
Symbolically,
E(uᵢuⱼ) = 0,  where i ≠ j
In short, autocorrelation is a problem that arises while using the OLS method to estimate the simple linear regression model when this assumption fails.
According to Tintner, autocorrelation is 'lag correlation of a given series with itself, lagged by a number of time units', while serial correlation is 'lag correlation between two different series'.
8.3 SOURCES OF AUTOCORRELATION
The problem of autocorrelation arises while estimating the simple linear regression model by the OLS method for the following reasons:
1. A time series that varies or changes slowly (inertia) tends to show the problem of autocorrelation.
2. If some important independent variables are omitted from the regression model, the problem of autocorrelation arises.
3. If the regression model is framed in the wrong mathematical (functional) form, the successive values of the residuals become interdependent.
4. Averaging or smoothing the data dampens its fluctuations, and as a result the disturbance term exhibits the problem of autocorrelation.
5. If missing figures are filled in through interpolation or extrapolation, this creates a problem of interdependence between the disturbances.
6. When the disturbance term in the regression model is incorrectly manipulated or transformed, autocorrelation is created.
8.4 DETECTION OF AUTOCORRELATION
There are mainly three methods to detect the problem of autocorrelation, as follows:
Graphical Method
The Runs Test
Durbin-Watson d Test
1. Graphical Method:
Is there a problem of autocorrelation? The answer to this question can be obtained by examining the residuals.
There are various ways of examining the residuals:
1) We can simply plot the residuals against time, which is known as the time sequence plot.
2) We can plot the standardized residuals against time and examine them for detecting the problem of autocorrelation.
3) Alternatively, we can plot the residuals ût against û(t−1), as sketched below and illustrated in the figures that follow.
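A short Python sketch of the third option (numpy and matplotlib assumed; the function name is illustrative):

import numpy as np
import matplotlib.pyplot as plt

def plot_lagged_residuals(residuals):
    """Scatter u_t against u_{t-1}: points concentrated in quadrants
    I and III suggest positive autocorrelation, in quadrants II and IV
    negative autocorrelation."""
    u = np.asarray(residuals, dtype=float)
    plt.axhline(0, color="grey")
    plt.axvline(0, color="grey")
    plt.scatter(u[:-1], u[1:])
    plt.xlabel("u(t-1)")
    plt.ylabel("u(t)")
    plt.show()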
(Figures: scatter plots of ût against û(t−1) illustrating Positive Autocorrelation, Negative Autocorrelation and No Autocorrelation.)
the presence of positive autocorrelation.
When, pairs of residuals are more in II and IV quadrants, there
is the presence of negative autocorrelation.
When, pairs of residuals are equals in the all four quadrant s,
there is no presence of autocorrelation.
2.The Runs Test
The runs test was developed by R. C. Geary in 1970, in the article entitled 'Relative Efficiency of Count of Sign Changes for Assessing Residual Autoregression in Least Squares Regression' in Biometrika.
The runs test is also known as the Geary test, and it is a non-parametric test.
Suppose there are 40 observations of residuals as follows:
(---------)(+++++++++++++++++++++)(----------)
Thus, there are 9 negative residuals, followed by 21 positive residuals, followed by 10 negative residuals, for a total of 40 observations.
First, let us define the concepts of a run and the length of a run.
Run:
A run is an uninterrupted sequence of one symbol or attribute, such as + or −.
Length of Run:
The length of a run is the number of elements in it.
In the above series:
N = total number of observations = N₁ + N₂ = 40
N₁ = number of positive residuals = 21
N₂ = number of negative residuals = 19
R = number of runs = 3
Now, taking the null hypothesis that the successive residuals are independent, and assuming that N₁ > 10 and N₂ > 10, the number of runs R is approximately normally distributed with
Mean: E(R) = 2N₁N₂/N + 1
Variance: σ_R² = 2N₁N₂(2N₁N₂ − N) / [N²(N − 1)]
Now let us construct the confidence interval (CI) for R:
95% CI for R = E(R) ± 1.96 σ_R
99% CI for R = E(R) ± 2.58 σ_R
Take either confidence interval for R from the above two.
Decision Rule:
If the number of runs R lies inside the preceding confidence interval, the null hypothesis is accepted.
If the number of runs R lies outside the preceding confidence interval, the null hypothesis is rejected.
When we reject the null hypothesis, it means that the residuals exhibit autocorrelation, and vice versa.
3. Durbin-Watson d Test:
The most celebrated test for detecting autocorrelation or serial correlation was developed by the statisticians Durbin and Watson in the article entitled 'Testing for Serial Correlation in Least Squares Regression' in Biometrika in 1951. This test is popularly known as the Durbin-Watson d statistic test.
The Durbin-Watson d statistic is defined as
d = Σ_{t=2}^{n} (ût − û(t−1))² / Σ_{t=1}^{n} ût²    ...(1)
Thus, the Durbin-Watson d statistic is the ratio of the sum of squared differences between successive residuals, Σ(ût − û(t−1))², to the sum of squared residuals, Σût².
Assumptions:
This test is based on the following assumptions:
i) The regression model includes an intercept term (β₁).
ii) The residuals follow the first-order autoregressive scheme:
ut = ρ u(t−1) + vt
iii) The regression model contains no lagged value of the dependent variable; that is, models such as
Yt = β₁ + β₂Xt + β₃Y(t−1) + ut
are excluded.
iv) All explanatory variables (X's) are non-stochastic.
v) There are no missing observations in the data.
Expanding the numerator of d,
d = [Σût² + Σû(t−1)² − 2Σût û(t−1)] / Σût²
Since Σût² and Σû(t−1)² differ by only one observation, they are approximately equal, so
d ≈ 2[Σût² − Σût û(t−1)] / Σût² = 2[1 − (Σût û(t−1) / Σût²)]
Where
ρ̂ = Σût û(t−1) / Σût²
Therefore,
d ≈ 2(1 − ρ̂)
−1 ≤ ρ̂ ≤ 1
The value of ρ̂ lies between −1 and 1, inclusive.
0 ≤ d ≤ 4
The value of d therefore lies between 0 and 4, inclusive.
When ρ̂ = 0, d = 2: No autocorrelation
When ρ̂ = 1, d = 0: Perfect positive autocorrelation
When ρ̂ = −1, d = 4: Perfect negative autocorrelation
How to apply this test:
1. Run the regression and obtain the residuals ût.
2. Compute d (by using equation (1)).
3. For the given sample size and the given number of explanatory variables, find the critical d_L and d_U values.
4. Then decide about the presence of autocorrelation by using the following rules:
Null hypothesis                              Decision        Range of d
No positive autocorrelation                  Reject          0 < d < d_L
No positive autocorrelation                  No decision     d_L ≤ d ≤ d_U
No negative autocorrelation                  Reject          4 − d_L < d < 4
No negative autocorrelation                  No decision     4 − d_U ≤ d ≤ 4 − d_L
No autocorrelation (positive or negative)    Do not reject   d_U < d < 4 − d_U
In the 'no decision' zones, the Durbin-Watson test remains inconclusive; this is the limitation of the test.
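A small Python sketch of the statistic, with a check of the d ≈ 2(1 − ρ̂) relationship on simulated AR(1) residuals (numpy assumed; the simulation is illustrative only):

import numpy as np

def durbin_watson(residuals):
    """d = sum_{t=2..n}(u_t - u_{t-1})^2 / sum_{t=1..n} u_t^2;
    d near 2 -> no autocorrelation, near 0 -> positive, near 4 -> negative."""
    u = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(u) ** 2) / np.sum(u ** 2)

# AR(1) residuals with rho = 0.9 should give d close to 2*(1 - 0.9) = 0.2
rng = np.random.default_rng(0)
u = np.zeros(500)
for t in range(1, 500):
    u[t] = 0.9 * u[t - 1] + rng.normal()
print(durbin_watson(u))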
8.5 CONSEQUENCES OF AUTOCORRELATION
1. When the problem of autocorrelation arises in the regression model, we get linear, unbiased and consistent parameter estimates, but the parameter estimates do not have minimum variance.
2. In the presence of autocorrelation in the regression model, we get inefficient parameter estimates.
3. Hypothesis testing becomes invalid in the presence of autocorrelation.
4. Because the variance of the parameter estimates is not minimum, the confidence intervals are wide in the presence of autocorrelation in the regression model.
model,26becomes less identified and determination c oefficient
becomes over identified.
8.6QUESTIONS
1.Explain the meaning and sources of autocorrelation.
2.Explain the detection of autocorrelation.
3.Explain the sources and consequences of autocorrelation.
8.7 REFERENCES
Gujarati Damodar N, Porter Draw n C & Pal Manoranjan, ‘Basic
Ecometrics’, Sixth Edition, Mc Graw Hill.
Hatekar Neeraj R. ‘Principles of Econometrics : An Introduction
(Using R) SAGE Publications, 2010
Kennedy P, ‘A Guide to Econometrics’, Sixth Edition, Wiley
Blackwell Edition, 2008

PROBLEMS IN SIMPLE LINEAR
REGRESSION MODEL:
MULTICOLLINEARY
Unit Structure :
9.0 Objectives
9.1 Introduction
9.2 Multicolinearity
9.3 Sources of Multicolinearity
9.4 Detection of Multicolinearity
9.5 Consequences of Multicolinearity
9.6 Summary
9.7 Questions
9.8 References
9.0 OBJECTIVES
1.To understand the causes of Autocorrelation.
2.To understand the detection of Autocorrelation.
3.To understand the consequences of Autocorrelation.
9.1 INTRODUCTION
While using the OLS method for the estimation of simple
linear regression model, if assumption 10 which is no perfect
multicolinearity does not fulfil, the problem of autocorrelation in the
simple linear regression model arises.
9.2MULTICOLINEARITY
You all studied the ten assump tions OLS (Ordinary Least
Square) method which are also assumptions of Classical Linear
Regression Model (CLRM). The tenth assumption of OLS method
is that there is no perfect linear relationship among the explanatory
variables (X’s)
The multicolinea rity is due to economist Ragner Frisch. The
multicolinearity is a existence of a perfect linear relationshipmunotes.in

Page 107

107between the some or all explanatory variables of a regression
model.
There are five types of degree of multicolinearity which have
been shown in the following figures.
If we consider, there are two explanatory variables namely
X2,X3and Y is dependent.
No Co linearity :
Low Co linearity :
Moderate Co linearity:
High Co linearity :
X2
X2Y
X3
X2Y
X3
X2Y
X3
Y
X3munotes.in

Page 108

108Very High Col ineari ty:
Why the OLS method or classical linear regression model
assumes that there is no t existence of multicolinearity ?The answer
of this question is that, if the multicolinearity is perfect, the
regression coefficients of the explanatory variable (X’s), the
regression coefficients of the explanatory variable (Xs) are
indeterminate and the standard errors are infinite. And if the
multicolinearity is less, the regression coefficients are determinate;
possess large standard error which means that the coefficients
cannot be estimated with accuracy.
9.3SOURCES OF MULTICOLINEARITY
There are mainly four causes or sources of mult icolinearity.
1. The data collection method is responsible to create the
problem of s multicolinearity. For example, sampling of limited
range of the values which taken by regressions in the population.
2. To constraints on the model which can be respons ible to
create the problem of multicolinearity.
3. Because of model specification, the problem of
multicolinearity arises.
4. Because of over identified , multicolinearity arises.
These are the major causes of multicolinearity.
9.4DETECTION OF MULTIC OLINEARITY
There is no specific method available for detection of
multicolinearity. Thus, following these rules are used to detect the
problem of multicolinearity.
1.High R2but few significant –Ratio’s.
2.High pair -wise correlations among regression s.
3.Examination of partial correlation.
4.Auxiliary Regression.
X2X3Y
munotes.in
1. High R² but few significant t-ratios:
If R² (the coefficient of determination) is high (more than 0.8), the F test will in most cases reject the hypothesis that the partial slope coefficients are simultaneously equal to zero, but the individual t tests will indicate that very few of the partial slope coefficients are statistically different from zero.
2. High pair-wise correlations among regressors:
If the zero-order correlation coefficient between two independent variables in the regression model is high, the problem of multicollinearity is likely to be serious. A high zero-order correlation coefficient is, however, a sufficient but not a necessary condition for the presence of multicollinearity in a regression model. If there are only two explanatory variables in the regression model, a high zero-order correlation coefficient is a useful device for identifying the presence of multicollinearity.
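A short Python sketch of this check (numpy assumed; the simulated regressors are purely illustrative):

import numpy as np

rng = np.random.default_rng(1)
x2 = rng.normal(size=100)
x3 = 0.95 * x2 + 0.05 * rng.normal(size=100)   # nearly collinear with x2
x4 = rng.normal(size=100)

# pairwise correlation matrix of the regressors; absolute values
# above about 0.8 flag a potentially serious collinear pair
print(np.corrcoef(np.column_stack([x2, x3, x4]), rowvar=False))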
3. Examination of partial correlations:
This method of detecting the problem of multicollinearity was suggested by Farrar and Glauber. In this method, if the regression of Y on all the X's yields a very high overall coefficient of determination but the partial correlation coefficients are comparatively small, at least one variable may be superfluous; this is a symptom of the problem of multicollinearity.
4. Auxiliary regressions:
To identify which independent variables are correlated with which other independent variables, we regress each independent variable Xᵢ on the remaining X variables. We then consider the relation between the F-test criterion Fᵢ and the coefficient of determination R²ᵢ of each auxiliary regression, for which the following formula is used:
Fᵢ = [R²ᵢ / (k − 2)] / [(1 − R²ᵢ) / (n − k + 1)]
Where,
R²ᵢ = coefficient of determination of the i-th auxiliary regression
k = number of explanatory variables
n = sample size
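A Python sketch of the auxiliary-regression idea (numpy assumed; the function name and array layout are illustrative):

import numpy as np

def auxiliary_r2(X):
    """For each explanatory variable X_i, regress it on the remaining
    X's and return R_i^2; values close to 1 signal that X_i is nearly
    a linear combination of the others (multicollinearity).
    X is an (n, k) array of explanatory variables without a constant."""
    n, k = X.shape
    r2 = np.empty(k)
    for i in range(k):
        others = np.column_stack([np.ones(n), np.delete(X, i, axis=1)])
        coef, _, _, _ = np.linalg.lstsq(others, X[:, i], rcond=None)
        e = X[:, i] - others @ coef
        tss = np.sum((X[:, i] - X[:, i].mean()) ** 2)
        r2[i] = 1 - (e @ e) / tss
    return r2   # each F_i in the formula above follows from its R_i^2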
9.5 CONSEQUENCES OF MULTICOLLINEARITY
The consequences of multicollinearity are as follows:
1) The OLS estimators retain their BLUE properties, but their variances and covariances are very high.
2) Confidence intervals are much wider because of the high variances and covariances, so the null hypothesis (H₀) is accepted more readily.
3) The t-ratios of one or more coefficients are statistically insignificant because of the high variances and covariances.
4) Although the t-ratios of one or more coefficients are statistically insignificant, we may still get a very high value of R².
5) In the presence of multicollinearity, the estimators and their standard errors can be sensitive to even small changes in the data.
6) If there is exact linear correlation among the explanatory variables in the model, the regression coefficients are indeterminate and their standard errors are infinite.
7) If there is imperfect linear correlation among the explanatory variables in the model, the regression coefficients are determinate, but their standard errors are very high.
9.6 SUMMARY
When linearity is imposed on the simple, two-variable regression model, it is called the simple linear regression model.
There are two main methods for estimating the simple linear regression model. When we use the ordinary least squares (OLS) method for its estimation and the assumptions of homoscedasticity (equal variance of uᵢ), no autocorrelation between the disturbance terms, and no perfect multicollinearity are not fulfilled, the problems of heteroscedasticity, autocorrelation and multicollinearity respectively arise, as has been discussed in this unit.
9.7 QUESTIONS
1. Explain the meaning and sources of multicollinearity.
2. Explain the detection of multicollinearity.
3. Explain the sources and consequences of multicollinearity.
9.8 REFERENCES
Gujarati Damodar N., Porter Dawn C. & Pal Manoranjan, 'Basic Econometrics', Sixth Edition, McGraw Hill.
Hatekar Neeraj R., 'Principles of Econometrics: An Introduction (Using R)', SAGE Publications, 2010.
Kennedy P., 'A Guide to Econometrics', Sixth Edition, Wiley Blackwell, 2008.